| text string | source string |
|---|---|
We prove K-stability of every smooth member of the family 2.15 of the
Mori-Mukai classification. | http://arxiv.org/abs/2304.11420v1 |
Computing strongly connected components (SCC) is a fundamental problem in
graph processing. As today's real-world graphs are getting larger and larger,
parallel SCC is increasingly important. SCC is challenging in the parallel
setting and is particularly hard on large-diameter graphs. Many existing
parallel SCC implementations can be even slower than Tarjan's sequential
algorithm on large-diameter graphs.
To tackle this challenge, we propose an efficient parallel SCC implementation
using a new parallel reachability algorithm. Our solution is based on a novel
idea referred to as vertical granularity control (VGC). It breaks the
synchronization barriers to increase parallelism and hide scheduling overhead.
To use VGC in our SCC algorithm, we also design an efficient data structure
called the \emph{parallel hash bag}. It uses parallel dynamic resizing to avoid
redundant work in maintaining frontiers (vertices processed in a round).
We implement the parallel SCC algorithm by Blelloch et al.\ (J.\ ACM, 2020)
using our new parallel reachability algorithm. We compare our implementation to
the state-of-the-art systems, including GBBS, iSpan, Multi-step, and our highly
optimized Tarjan's (sequential) algorithm, on 18 graphs, including social, web,
$k$-NN, and lattice graphs. On a machine with 96 cores, our implementation is
the fastest on 16 out of 18 graphs. On average (geometric means) over all
graphs, our SCC is 6.0$\times$ faster than the best previous parallel code
(GBBS), 12.8$\times$ faster than Tarjan's sequential algorithm, and
2.7$\times$ faster than the \emph{best existing implementation on each graph}.
We believe that our techniques are of independent interest. We also apply our
parallel hash bag and VGC scheme to other graph problems, including
connectivity and least-element lists (LE-lists). | http://arxiv.org/abs/2303.04934v2 |
The past decade has witnessed a plethora of works that leverage the power of
visualization (VIS) to interpret machine learning (ML) models. The
corresponding research topic, VIS4ML, continues to grow at a fast pace. To better
organize this large body of work and shed light on the developing trends of VIS4ML,
we provide a systematic review of these works through this survey. Since data
quality greatly impacts the performance of ML models, our survey focuses
specifically on summarizing VIS4ML works from the data perspective. First, we
categorize the common data handled by ML models into five types, explain the
unique features of each type, and highlight the corresponding ML models that
are good at learning from them. Second, from the large number of VIS4ML works,
we tease out six tasks that operate on these types of data (i.e., data-centric
tasks) at different stages of the ML pipeline to understand, diagnose, and
refine ML models. Lastly, by studying the distribution of 143 surveyed papers
across the five data types, six data-centric tasks, and their intersections, we
analyze the prospective research directions and envision future research
trends. | http://arxiv.org/abs/2307.07712v1 |
The diabatic framework generalizes the adiabatic approximation built into the
Born-Oppenheimer (BO) formalism, and is devised to rigorously incorporate the
mixing of BO-approximation eigenstates with two-particle thresholds. We
recently applied this framework in a bound-state approximation to the mixing of
hidden-charm dynamical-diquark tetraquark states with open-charm di-meson
thresholds. Since almost all of these states are observed as above-threshold
resonances, we here implement the corresponding scattering formalism to allow
for a study of exotic tetraquark resonances within the diabatic framework. We
calculate elastic open-charm di-meson cross sections (in channels with zero,
open, and hidden strangeness) as functions of center-of-mass energy, and
observe the development of true resonances, near resonances, and various
threshold cusp effects. As an example, $\chi_{c1}(3872)$ can originate in the
$1^{++}$ channel as a diquark-antidiquark state enhanced by the $D^0
\overline{D}^{*0}$ threshold, with or without an additional contribution from
the conventional charmonium $\chi_{c1}(2P)$ state. | http://arxiv.org/abs/2305.09146v2 |
We consider a cost sharing problem on a weighted undirected graph, where all
the nodes want to connect to a special node called source, and they need to
share the total cost (weights) of the used edges. Each node except for the
source has a private valuation of the connection, and it may block others'
connections by strategically cutting its adjacent edges to reduce its cost
share, which may increase the total cost. We aim to design mechanisms to
prevent the nodes from misreporting their valuations and cutting their adjacent
edges. We first show that it is impossible for such a mechanism to further
satisfy budget balance (cover the total cost) and efficiency (maximize social
welfare). Then, we design two feasible cost sharing mechanisms that incentivize
each node to offer all its adjacent edges and truthfully report its valuation,
and also satisfy either budget balance or efficiency. | http://arxiv.org/abs/2303.03083v1 |
Pre-trained transformer language models (LMs) have in recent years become the
dominant paradigm in applied NLP. These models have achieved state-of-the-art
performance on tasks such as information extraction, question answering,
sentiment analysis, document classification and many others. In the biomedical
domain, significant progress has been made in adapting this paradigm to NLP
tasks that require the integration of domain-specific knowledge as well as
statistical modelling of language. In particular, research in this area has
focused on the question of how best to construct LMs that take into account not
only the patterns of token distribution in medical text, but also the wealth of
structured information contained in terminology resources such as the UMLS.
This work contributes a data-centric paradigm for enriching the language
representations of biomedical transformer-encoder LMs by extracting text
sequences from the UMLS. This allows for graph-based learning objectives to be
combined with masked-language pre-training. Preliminary results from
experiments in the extension of pre-trained LMs as well as training from
scratch show that this framework improves downstream performance on multiple
biomedical and clinical Named Entity Recognition (NER) tasks. | http://arxiv.org/abs/2307.11170v1 |
This paper presents a novel solution for UAV control in cooperative
multi-robot systems, which can be used in various scenarios such as
leader-following, landing on a moving base, or specific relative motion with a
target. Unlike classical methods that tackle UAV control in the world frame, we
directly control the UAV in the target coordinate frame, without making motion
assumptions about the target. In detail, we formulate a non-linear model
predictive controller of a UAV, referred to as the agent, within a non-inertial
frame (i.e., the target frame). The system requires the relative states (pose
and velocity), the angular velocity and the accelerations of the target, which
can be obtained by relative localization methods and ubiquitous MEMS IMU
sensors, respectively. This framework eliminates dependencies that are vital in
classical solutions, such as accurate state estimation for both the agent and
target, prior knowledge of the target motion model, and continuous trajectory
re-planning for some complex tasks. We have performed extensive simulations to
investigate the control performance with varying motion characteristics of the
target. Furthermore, we conducted real robot experiments, employing either
simulated relative pose estimation from motion capture systems indoors or
directly from our previous relative pose estimation devices outdoors, to
validate the applicability and feasibility of the proposed approach. | http://arxiv.org/abs/2306.11259v2 |
Deep point cloud registration methods struggle with partial overlaps and
rely on labeled data. To address these issues, we propose UDPReg, an
unsupervised deep probabilistic registration framework for point clouds with
partial overlaps. Specifically, we first adopt a network to learn posterior
probability distributions of Gaussian mixture models (GMMs) from point clouds.
To handle partial point cloud registration, we apply the Sinkhorn algorithm to
predict the distribution-level correspondences under the constraint of the
mixing weights of GMMs. To enable unsupervised learning, we design three
distribution consistency-based losses: self-consistency, cross-consistency, and
local contrastive. The self-consistency loss is formulated by encouraging GMMs
in Euclidean and feature spaces to share identical posterior distributions. The
cross-consistency loss derives from the fact that the points of two partially
overlapping point clouds belonging to the same clusters share the cluster
centroids. The cross-consistency loss allows the network to flexibly learn a
transformation-invariant posterior distribution of two aligned point clouds.
The local contrastive loss helps the network extract discriminative
local features. Our UDPReg achieves competitive performance on the
3DMatch/3DLoMatch and ModelNet/ModelLoNet benchmarks. | http://arxiv.org/abs/2303.13290v1 |
Compositional and domain generalization present significant challenges in
semantic parsing, even for state-of-the-art semantic parsers based on
pre-trained language models (LMs). In this study, we empirically investigate
improving an LM's generalization in semantic parsing with two simple
techniques: at the token level, we introduce a token preprocessing method to
preserve the semantic boundaries of tokens produced by LM tokenizers; at the
sequence level, we propose to use special tokens to mark the boundaries of
components aligned between input and output. Our experimental results on two
text-to-SQL semantic parsing datasets show that our token preprocessing,
although simple, can substantially improve the LM performance on both types of
generalization, and our component boundary marking method is particularly
helpful for compositional generalization. | http://arxiv.org/abs/2305.17378v1 |
The B[e] phenomenon is manifested by a heterogeneous group of stars
surrounded by gaseous and dusty circumstellar envelopes with similar physical
conditions. Among these stars, the FS CMa-type objects are suspected to be
binary systems, which could be experiencing or have undergone a mass-transfer
process that could explain the large amount of material surrounding them. We
aim to contribute to the knowledge of a recently confirmed binary, MWC 645,
which could be undergoing an active mass-transfer process. We present
near-infrared and optical spectra, identify atomic and molecular spectral
features, and derive different quantitative properties of line profiles. Based
on publicly available photometric data, we search for periodicity in the light
curve and model the spectral energy distribution. We have detected molecular
bands of CO in absorption at 1.62 $\mu$m and 2.3 $\mu$m for the first time. We
derive an upper limit for the effective temperature of the cool binary
component. We found a correlation between the enhancement of the H$\alpha$
emission and the decrease in optical brightness that could be associated with
mass-ejection events or an increase in mass loss. We outline the global
properties of the envelope, possibly responsible for brightness variations due
to a variable extinction, and briefly speculate on different possible
scenarios. | http://arxiv.org/abs/2306.16536v1 |
We propose a geometric integrator to numerically approximate the flow of Lie
systems. The key is a novel procedure that integrates the Lie system on a Lie
group intrinsically associated with a Lie system on a general manifold via a
Lie group action, and then generates the discrete solution of the Lie system on
the manifold via a solution of the Lie system on the Lie group. One major
result from the integration of a Lie system on a Lie group is that one is able
to solve all associated Lie systems on manifolds at the same time, and that Lie
systems on Lie groups can be described through first-order systems of linear
homogeneous ordinary differential equations (ODEs) in normal form. This brings
a lot of advantages, since solving a linear system of ODEs involves less
numerical cost. Specifically, we use two families of numerical schemes on the
Lie group, which are designed to preserve its geometrical structure: the first
is based on the Magnus expansion, whereas the second is based on
Runge-Kutta-Munthe-Kaas (RKMK) methods. Moreover, since the aforementioned
action relates the Lie group and the manifold where the Lie system evolves, the
resulting integrator preserves any geometric structure of the latter. We
compare both methods for Lie systems with geometric invariants, particularly a
class of Lie systems on curved spaces. We also illustrate the superiority of
our method for describing long-term behavior and for differential equations
admitting solutions whose geometric features depend heavily on initial
conditions. As already mentioned, our milestone is to show that the method we
propose preserves all the geometric invariants very faithfully, in comparison
with nongeometric numerical methods. | http://arxiv.org/abs/2308.00820v2 |
Reconfigurable intelligent surfaces (RISs) are widely considered a promising
technology for future wireless communication systems. As an important indicator
of RIS-assisted communication systems in green wireless communications, energy
efficiency (EE) has recently received intensive research interest as an
optimization target. However, most previous works have ignored the different
power consumption between ON and OFF states of the PIN diodes attached to each
RIS element. This oversight results in extensive unnecessary power consumption
and reduction of actual EE due to the inaccurate power model. To address this
issue, in this paper, we first utilize a practical power model for a
RIS-assisted multi-user multiple-input single-output (MU-MISO) communication
system, which takes into account the difference in power dissipation caused by
ON-OFF states of RIS's PIN diodes. Based on this model, we formulate a more
accurate EE optimization problem. However, this problem is non-convex and has
mixed-integer properties, which poses a challenge for optimization. To solve
the problem, an effective alternating optimization (AO) algorithm framework is
utilized to optimize the base station and RIS beamforming precoder separately.
To obtain the essential RIS beamforming precoder, we develop two effective
methods based on maximum gradient search and SDP relaxation, respectively.
Theoretical analysis shows that the exponential complexity of the original
problem has been reduced to polynomial complexity. Simulation results demonstrate that
the proposed algorithm outperforms the existing ones, leading to a significant
increase in EE across a diverse set of scenarios. | http://arxiv.org/abs/2310.15901v1 |
The main result of the paper is the Fibonacci-like property of the partition
function. The partition function $p(n)$ satisfies the inequality $p(n) \leq p(n-1) +
p(n-2)$. Our result shows that if we impose certain restrictions on the
partition, then the inequality becomes an equality. Furthermore, we extend this
result to cases with a greater number of summands. | http://arxiv.org/abs/2308.06289v1 |
Machine learning in quantum computing and communication provides intensive
opportunities for revolutionizing the field of Physics, Mathematics, and
Computer Science. There is a gap in the understanding of this
interdisciplinary domain, and this lack of core understanding presents an
opportunity to explore machine learning techniques for the domain. This
paper gives a comprehensive review of state-of-the-art approaches in quantum
computing and quantum communication in the context of Artificial Intelligence
and machine learning models. The paper reviews the classical ML models that
have been employed in various ways for quantum computation such as quantum
error correction, quantum communication, quantum cryptography, and mapping
quantum algorithms to the existing hardware. The paper also illustrates how the
relevant current challenges can be transformed into future research avenues. | http://arxiv.org/abs/2310.03434v1 |
This paper presents the results of the first experiments on 4D tracking of a
single electron using a linear multi-anode photomultiplier tube. The reported
technology makes it possible to fully track a single electron in a storage
ring, which requires tracking of amplitudes and phases for both slow
synchrotron and fast betatron oscillations. Complete tracking of a point-like
object enabled the first direct measurements of single-particle dynamical
properties, including dynamical invariants, amplitude-dependent oscillation
frequencies, and chaotic behavior. | http://arxiv.org/abs/2307.06183v1 |
The primary aim of this research was to address the limitations observed in
the medical knowledge of prevalent large language models (LLMs) such as
ChatGPT, by creating a specialized language model with enhanced accuracy in
medical advice. We achieved this by adapting and refining the large language
model meta-AI (LLaMA) using a large dataset of 100,000 patient-doctor dialogues
sourced from a widely used online medical consultation platform. These
conversations were cleaned and anonymized to respect privacy concerns. In
addition to the model refinement, we incorporated a self-directed information
retrieval mechanism, allowing the model to access and utilize real-time
information from online sources like Wikipedia and data from curated offline
medical databases. The fine-tuning of the model with real-world patient-doctor
interactions significantly improved the model's ability to understand patient
needs and provide informed advice. By equipping the model with self-directed
information retrieval from reliable online and offline sources, we observed
substantial improvements in the accuracy of its responses. Our proposed model,
ChatDoctor, represents a significant advancement in medical LLMs, demonstrating
a marked improvement in understanding patient inquiries and providing
accurate advice. Given the high stakes and low error tolerance in the medical
field, such enhancements in providing accurate and reliable information are not
only beneficial but essential. | http://arxiv.org/abs/2303.14070v5 |
We propose MDSC (Music-Dance-Style Consistency), the first evaluation metric
that assesses to what degree the dance moves and music match. Existing metrics
can only evaluate the motion fidelity and diversity and the degree of rhythmic
matching between music and dance. MDSC measures how stylistically correlated
the generated dance motion sequences and the conditioning music sequences are.
We found that directly measuring the embedding distance between motion and
music is not an optimal solution. We instead tackle this through modeling it as
a clustering problem. Specifically, 1) we pre-train a music encoder and a
motion encoder, then 2) we learn to map and align the motion and music
embedding in joint space by jointly minimizing the intra-cluster distance and
maximizing the inter-cluster distance, and 3) for evaluation purposes, we
encode the dance moves into embedding and measure the intra-cluster and
inter-cluster distances, as well as the ratio between them. We evaluate our
metric on the results of several music-conditioned motion generation methods
and, combined with a user study, find that our proposed metric is a robust
evaluation metric for measuring the music-dance style correlation. | http://arxiv.org/abs/2309.01340v3 |
This paper aims to investigate the effectiveness of the recently proposed
Boosted Difference of Convex functions Algorithm (BDCA) when applied to
clustering with constraints and set clustering with constraints problems. This
is the first paper to apply BDCA to a problem with nonlinear constraints. We
present the mathematical basis for the BDCA and Difference of Convex functions
Algorithm (DCA), along with a penalty method based on distance functions. We
then develop algorithms for solving these problems and computationally
implement them, with publicly available implementations. We revisit old
examples and provide new experiments to test the algorithms. We find that the
BDCA method converges in fewer iterations than the corresponding DCA-based
method. In addition, BDCA yields faster CPU running-times in all tested
problems. | http://arxiv.org/abs/2310.14148v1 |
We have studied the lattice dynamics, electron-phonon coupling, and
superconducting properties of $\alpha$-MoB$_2$, as a function of applied
pressure, within the framework of density functional perturbation theory using
a mixed-basis pseudopotential method. We found that phonon modes located along
the A$-$H, H$-$L, and L$-$A high-symmetry paths exhibit large phonon linewidths
and contribute significantly to the electron-phonon coupling constant. Although
linewidths are particularly large for the highest-frequency optical phonon
modes (dominated by B vibrations), their contribution to the electron-phonon
coupling constant is marginal. The latter is largely controlled by the acoustic
low-frequency modes of predominantly Mo character. It was observed that at a
pressure of $90$~GPa, where $\alpha$-MoB$_2$ forms, the phonon-mediated pairing
falls into the strong-coupling regime, and the estimate for the superconducting
critical temperature $T_c$ agrees well with experimental observations. When
further increasing the applied pressure, a reduction of $T_c$ is predicted,
which correlates with a hardening of the acoustic low-frequency phonon modes
and a decrease of the electron-phonon coupling parameter. | http://arxiv.org/abs/2306.00803v2 |
Change detection (CD) methods have been applied to optical data for decades,
while the use of hyperspectral data with a fine spectral resolution has been
rarely explored. CD is applied in several sectors, such as environmental
monitoring and disaster management. Thanks to the PRecursore IperSpettrale
della Missione operativA (PRISMA), hyperspectral-from-space CD is now possible.
In this work, we apply standard and deep-learning (DL) CD methods to different
targets, from natural to urban areas. We propose a pipeline starting from
coregistration, followed by CD with a full-spectrum algorithm and by a DL
network developed for optical data. We find that changes in vegetation and
built environments are well captured. The spectral information is valuable to
identify subtle changes and the DL methods are less affected by noise compared
to the statistical method, but atmospheric effects and the lack of reliable
ground truth represent a major challenge to hyperspectral CD. | http://arxiv.org/abs/2310.13627v1 |
Modern large language models demonstrate impressive capabilities in text
generation and generalization. However, they often struggle with solving text
editing tasks, particularly when it comes to correcting spelling errors and
mistypings. In this paper, we present a methodology for generative spelling
correction (SC), which was tested on English and Russian languages and
potentially can be extended to any language with minor changes. Our research
mainly focuses on exploring natural spelling errors and mistypings in texts and
studying the ways those errors can be emulated in correct sentences to
effectively enrich generative models' pre-train procedure. We investigate the
impact of such emulations and the models' abilities across different text
domains. In this work, we investigate two spelling corruption techniques: 1)
the first mimics human behavior when making a mistake by leveraging
statistics of errors from a particular dataset, and 2) the second adds the most
common spelling errors, keyboard miss-clicks, and some heuristics within the texts. We
conducted experiments employing various corruption strategies, models'
architectures and sizes on the pre-training and fine-tuning stages and
evaluated the models using single-domain and multi-domain test sets. As a
practical outcome of our work, we introduce SAGE (Spell checking via
Augmentation and Generative distribution Emulation). It is a library for
automatic generative SC that includes a family of pre-trained generative models
and built-in augmentation algorithms. | http://arxiv.org/abs/2308.09435v2 |
Gig workers, and the products and services they provide, play an increasingly
ubiquitous role in our daily lives. But despite growing evidence suggesting
that worker well-being in gig economy platforms has become a significant
societal problem, few studies have investigated possible solutions. We take a
stride in this direction by engaging workers, platform employees, and local
regulators in a series of speed dating workshops using storyboards based on
real-life situations to rapidly elicit stakeholder preferences for addressing
financial, physical, and social issues related to worker well-being. Our
results reveal that existing public and platformic infrastructures fall short
in providing workers with resources needed to perform gigs, surfacing a need
for multi-platform collaborations, technological innovations, as well as
changes in regulations, labor laws, and the public's perception of gig workers,
among others. Drawing from multi-stakeholder findings, we discuss these
implications for technology, policy, and service as well as avenues for
collaboration. | http://arxiv.org/abs/2302.13436v2 |
We introduce a causal framework for designing optimal policies that satisfy
fairness constraints. We take a pragmatic approach asking what we can do with
an action space available to us and only with access to historical data. We
propose two different fairness constraints: a moderation breaking constraint
which aims at blocking moderation paths from the action and sensitive attribute
to the outcome, and by that at reducing disparity in outcome levels as much as
the provided action space permits; and an equal benefit constraint which aims
at distributing gain from the new and maximized policy equally across sensitive
attribute levels, and thus at keeping pre-existing preferential treatment in
place or avoiding the introduction of new disparity. We introduce practical
methods for implementing the constraints and illustrate their uses on
experiments with semi-synthetic models. | http://arxiv.org/abs/2301.12278v1 |
Radioactive sources of the monoenergetic low-energy conversion electrons from
the decay of isomeric $^{83m}Kr$ are frequently used in systematic
measurements, particularly in neutrino mass and dark matter experiments.
For this purpose, the isomer is obtained by the decay of its parent
radionuclide $^{83}Rb$. In order to get more precise data on the gamma-rays
occurring in the $^{83}Rb$/$^{83m}Kr$ chain, we re-measured the relevant
gamma-ray spectra, because the previous measurement took place in 1976. The
obtained intensities are in fair agreement with this previous measurement. We
have, however, improved the uncertainties by a factor of 4.3, identified a new
gamma transition, and determined the energies of weaker gamma transitions more
precisely. | http://arxiv.org/abs/2302.05254v1 |
The development of Adaptive Cruise Control (ACC) systems aims to enhance the
safety and comfort of vehicles by automatically regulating the speed of the
vehicle to ensure a safe gap from the preceding vehicle. However, conventional
ACC systems are unable to adapt themselves to changing driving conditions and
drivers' behavior. To address this limitation, we propose a Long Short-Term
Memory (LSTM) based ACC system that can learn from past driving experiences and
adapt and predict new situations in real time. The model is constructed based
on the real-world highD dataset, acquired from German highways with the
assistance of camera-equipped drones. We evaluated the ACC system under
aggressive lane changes, when a vehicle in the side lane cuts in front, forcing
the subject vehicle to reduce speed. To this end, the proposed system was
assessed on a simulated driving environment and compared with a feedforward
Artificial Neural Network (ANN) model and Model Predictive Control (MPC) model.
The results show that the LSTM-based system is 19.25% more accurate than the
ANN model and 5.9% more accurate than the MPC model in terms of predicting
future values of subject vehicle acceleration. The simulation is done in
Matlab/Simulink environment. | http://arxiv.org/abs/2305.01095v2 |
In this paper we study triharmonic hypersurfaces immersed in a space form
$N^{n+1}(c)$. We prove that any proper CMC triharmonic hypersurface in the
sphere $\mathbb S^{n+1}$ has constant scalar curvature; any CMC triharmonic
hypersurface in the hyperbolic space $\mathbb H^{n+1}$ is minimal. Moreover, we
show that any CMC triharmonic hypersurface in the Euclidean space $\mathbb
R^{n+1}$ is minimal provided that the multiplicity of the principal curvature
zero is at most one. In particular, we are able to prove that every CMC
triharmonic hypersurface in the Euclidean space $\mathbb R^{6}$ is
minimal. These results extend some recent works due to Montaldo-Oniciuc-Ratto
and Chen-Guan, and give an affirmative answer to the generalized Chen's
conjecture. | http://arxiv.org/abs/2303.02612v1 |
We present the first simulation-based inference (SBI) of cosmological
parameters from field-level analysis of galaxy clustering. Standard galaxy
clustering analyses rely on analyzing summary statistics, such as the power
spectrum, $P_\ell$, with analytic models based on perturbation theory.
Consequently, they do not fully exploit the non-linear and non-Gaussian
features of the galaxy distribution. To address these limitations, we use the
{\sc SimBIG} forward modelling framework to perform SBI using normalizing
flows. We apply SimBIG to a subset of the BOSS CMASS galaxy sample using a
convolutional neural network with stochastic weight averaging to perform
massive data compression of the galaxy field. We infer constraints on $\Omega_m
= 0.267^{+0.033}_{-0.029}$ and $\sigma_8=0.762^{+0.036}_{-0.035}$. While our
constraints on $\Omega_m$ are in-line with standard $P_\ell$ analyses, those on
$\sigma_8$ are $2.65\times$ tighter. Our analysis also provides constraints on
the Hubble constant $H_0=64.5 \pm 3.8 \ {\rm km / s / Mpc}$ from galaxy
clustering alone. This higher constraining power comes from additional
non-Gaussian cosmological information, inaccessible with $P_\ell$. We
demonstrate the robustness of our analysis by showcasing our ability to infer
unbiased cosmological constraints from a series of test simulations that are
constructed using different forward models than the one used in our training
dataset. This work not only presents competitive cosmological constraints but
also introduces novel methods for leveraging additional cosmological
information in upcoming galaxy surveys like DESI, PFS, and Euclid. | http://arxiv.org/abs/2310.15256v1 |
In this paper, we derive achievable secrecy rate regions for the quantum
interference channel with classical inputs in the one-shot setting. The main
idea to this end is to use a combination of superposition and rate splitting
in the encoding scheme and to construct a decoding scheme based on simultaneous
decoding. | http://arxiv.org/abs/2301.03375v1 |
Modern high-throughput sequencing assays efficiently capture not only gene
expression and different levels of gene regulation but also a multitude of
genome variants. Focused analysis of alternative alleles of variable sites at
homologous chromosomes of the human genome reveals allele-specific gene
expression and allele-specific gene regulation by assessing allelic imbalance
of read counts at individual sites. Here we formally describe MIXALIME, an
advanced statistical framework for detecting allelic imbalance in allelic read
counts at single-nucleotide variants detected in diverse omics studies
(ChIP-Seq, ATAC-Seq, DNase-Seq, CAGE-Seq, and others). MIXALIME accounts for
copy-number variants and aneuploidy and for reference read mapping bias, and
provides several scoring models to balance sensitivity against specificity when
scoring data with varying levels of overdispersion caused by experimental
noise. | http://arxiv.org/abs/2306.08287v6 |
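Beta-binomial models are a common way to score allelic imbalance while allowing for overdispersion; the sketch below illustrates this general idea (MIXALIME's actual scoring models, which also handle copy number and mapping bias, are more elaborate). The `concentration` and `bias` parameters here are illustrative assumptions, not values from the paper.

```python
from scipy.stats import betabinom

def allelic_imbalance_pvalue(ref_reads, total_reads, concentration=20.0, bias=0.5):
    """Two-sided beta-binomial test for allelic imbalance at one SNV.

    The null expects a `bias` fraction of reference reads (0.5 = no mapping
    bias); `concentration` controls overdispersion (smaller = noisier data).
    Illustrative sketch, not MIXALIME's exact scoring model.
    """
    a = bias * concentration
    b = (1.0 - bias) * concentration
    dist = betabinom(total_reads, a, b)
    # Two-sided p-value: twice the smaller tail, capped at 1.
    lower = dist.cdf(ref_reads)
    upper = dist.sf(ref_reads - 1)
    return min(1.0, 2.0 * min(lower, upper))
```

A balanced site (10 reference reads out of 20) yields a large p-value, while a completely imbalanced one (20 of 20) is flagged as significant.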
Galaxy clusters are the products of structure formation through myriad
physical processes that affect their growth and evolution throughout cosmic
history. As a result, the matter distribution within galaxy clusters, or their
shape, is influenced by cosmology and astrophysical processes, in particular
the accretion of new material due to gravity. We introduce an analysis method
to investigate the 3D triaxial shapes of galaxy clusters from the Cluster
HEritage project with XMM-Newton -- Mass Assembly and Thermodynamics at the
Endpoint of structure formation (CHEX-MATE). In this work, the first paper of a
CHEX-MATE triaxial analysis series, we focus on utilizing X-ray data from XMM
and Sunyaev-Zel'dovich (SZ) effect maps from Planck and ACT to obtain a three
dimensional triaxial description of the intracluster medium (ICM) gas. We
present the forward modeling formalism of our technique, which projects a
triaxial ellipsoidal model for the gas density and pressure to compare directly
with the observed two dimensional distributions in X-rays and the SZ effect. A
Markov chain Monte Carlo method is used to estimate the posterior distributions of the
model parameters. Using mock X-ray and SZ observations of a smooth model, we
demonstrate that the method can reliably recover the true parameter values. In
addition, we apply the analysis to reconstruct the gas shape from the observed
data of one CHEX-MATE galaxy cluster, Abell 1689, to illustrate the technique.
The inferred parameters are in agreement with previous analyses for that
cluster, and our results indicate that the geometrical properties, including
the axial ratios of the ICM distribution, are constrained to within a few
percent. With much better precision than previous studies, we thus further
establish that Abell 1689 is significantly elongated along the line of sight,
resulting in its exceptional gravitational lensing properties. | http://arxiv.org/abs/2307.04794v2 |
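The core of the forward model is the projection of a triaxial 3D profile onto the sky plane. The following minimal numpy sketch (an illustrative beta-model-like profile, not the actual CHEX-MATE gas model) integrates an ellipsoidal density along the line of sight; `q1` and `q2` are assumed axis ratios.

```python
import numpy as np

def density(x, y, z, q1, q2):
    """Triaxial density depending only on the ellipsoidal radius.

    r^2 = x^2 + (y/q1)^2 + (z/q2)^2, with a simple beta-model-like profile
    n(r) = (1 + r^2)^(-3/2) (illustrative, not the paper's gas model).
    """
    r2 = x**2 + (y / q1) ** 2 + (z / q2) ** 2
    return (1.0 + r2) ** -1.5

def projected(x, y, q1=0.7, q2=0.5, zmax=50.0, nz=4000):
    """Integrate the 3D model along the z line of sight at sky position (x, y)."""
    zs = np.linspace(-zmax, zmax, nz)
    vals = density(x, y, zs, q1, q2)
    return vals.sum() * (zs[1] - zs[0])  # simple Riemann sum
```

When the line of sight lies along a principal axis, the resulting 2D isocontours are ellipses of axis ratio `q1`: the projected map satisfies `projected(x0, 0) == projected(0, q1*x0)` exactly, which is the kind of geometric signature the MCMC fit constrains.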
Let the symmetric functions be defined for the pair of integers $\left(
n,r\right) $, $n\geq r\geq 1$, by $p_{n}^{\left( r\right) }=\sum m_{\lambda }$
where $m_{\lambda }$ are the monomial symmetric functions, the sum being over
the partitions $\lambda $ of the integer $n$ with length $r$. We introduce, by
a generating function, a $q$-analog of $p_{n}^{\left( r\right) }$ and give some
of its properties. This $q$-analog is related to the classical form by means of
the $q$-Stirling numbers. Following the same procedure, we also begin the study
of a $p,q$-analog of $p_{n}^{\left( r\right) }$.
By specializing this $q$-analog in the series $\sum\nolimits_{n=0}^{
\infty }q^{\binom{n}{2}}t^{n}/n!$, we recover in a purely formal way a class
of polynomials $J_{n}^{\left( r\right) }$ historically introduced as
combinatorial enumerators, in particular of tree inversions. This also results
in a new linear recurrence for those polynomials whose triangular table can be
constructed, row by row, from the initial conditions $ J_{r}^{\left( r\right)
}=1$. The form of this recurrence is also given for the reciprocal polynomials
of $J_{n}^{\left( r\right) }$, known to be the sum enumerators of parking
functions. Explicit formulas for $J_{n}^{\left( r\right) }$ and their
reciprocals are deduced, leading inversely to new representations of these
polynomials as forest statistics. | http://arxiv.org/abs/2302.11221v5 |
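The defining sum $p_{n}^{\left( r\right) }=\sum m_{\lambda }$ over partitions of $n$ with length $r$ can be evaluated directly. The sketch below (plain Python, assuming at least $r$ variables) enumerates such partitions and sums the corresponding monomial symmetric functions.

```python
from itertools import permutations

def partitions(n, r, max_part=None):
    """Partitions of n into exactly r weakly decreasing parts, each >= 1."""
    if max_part is None:
        max_part = n
    if r == 1:
        if 1 <= n <= max_part:
            yield (n,)
        return
    for first in range(min(n - r + 1, max_part), 0, -1):
        for rest in partitions(n - first, r - 1, first):
            yield (first,) + rest

def m_lambda(lam, xs):
    """Monomial symmetric function m_lambda evaluated at xs (len(xs) >= len(lam))."""
    assert len(xs) >= len(lam)
    exps = tuple(lam) + (0,) * (len(xs) - len(lam))
    total = 0
    for perm in set(permutations(exps)):  # distinct exponent arrangements
        term = 1
        for x, e in zip(xs, perm):
            term *= x**e
        total += term
    return total

def p_n_r(n, r, xs):
    """p_n^{(r)} = sum of m_lambda over partitions of n with length r."""
    return sum(m_lambda(lam, xs) for lam in partitions(n, r))
```

For $r=1$ this reduces to the power sum, e.g. $p_2^{(1)}(1,2,3)=14$, while $p_2^{(2)}(1,2,3)=m_{(1,1)}=11$.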
Dolbeault, Esteban and Loss [Invent. Math., 2016] obtained an optimal
rigidity result, that is, when $a<0$ and $b_{\mathrm{FS}}(a)\leq b<a+1$ the
extremal function for the best constant $\mathcal{S}_{a,b}>0$ of the following
Caffarelli-Kohn-Nirenberg inequality is symmetric: \[
\mathcal{S}_{a,b}\left(\int_{\mathbb{R}^2}|x|^{-qb}|u|^q
\mathrm{d}x\right)^{\frac{2}{q}}
\leq \int_{\mathbb{R}^2}|x|^{-2a}|\nabla u|^2 \mathrm{d}x, \quad \mbox{for
all}\quad u\in C^\infty_0(\mathbb{R}^2), \] where
$b_{\mathrm{FS}}(a):=a-\frac{a}{\sqrt{a^2+1}}$, $q=\frac{2}{b-a}$. An important
task is to investigate the stability of the set of extremal functions
$\mathcal{M}$ for this inequality. First, we classify all solutions of the
linearized problem related to the extremals, which complements the work of
Felli and Schneider [J. Diff. Equ., 2003]. When $b_{\mathrm{FS}}(a)< b<a+1$, we
establish the stability of the previous inequality by using spectral estimates
combined with a compactness argument, showing that
\begin{align*}
\int_{\mathbb{R}^2}|x|^{-2a}|\nabla u|^2 \mathrm{d}x
-\mathcal{S}_{a,b}\left(\int_{\mathbb{R}^2}|x|^{-qb}|u|^q
\mathrm{d}x\right)^{\frac{2}{q}}
\geq \mathcal{B}
\mathrm{dist}(u,\mathcal{M})^2,\quad \mbox{for all}\quad u\in
C^\infty_0(\mathbb{R}^2),
\end{align*}
for some $\mathcal{B}>0$, however it is false when $b=b_{\mathrm{FS}}(a)$,
which extends the work of Wei and Wu [Math. Ann., 2022] to $\mathbb{R}^2$.
Furthermore, we obtain the existence of minimizers for $\mathcal{B}$ which
extends the recent work of K\"{o}nig [J. Eur. Math. Soc., to appear]. | http://arxiv.org/abs/2308.04111v2 |
Protein engineering is an emerging field in biotechnology that has the
potential to revolutionize various areas, such as antibody design, drug
discovery, food security, ecology, and more. However, the mutational space
involved is too vast to be handled through experimental means alone. Leveraging
accumulative protein databases, machine learning (ML) models, particularly
those based on natural language processing (NLP), have considerably expedited
protein engineering. Moreover, advances in topological data analysis (TDA) and
artificial intelligence-based protein structure prediction, such as AlphaFold2,
have made more powerful structure-based ML-assisted protein engineering
strategies possible. This review aims to offer a comprehensive, systematic, and
indispensable set of methodological components, including TDA and NLP, for
protein engineering and to facilitate their future development. | http://arxiv.org/abs/2307.14587v1 |
We demonstrate gate-tunable giant field-dependent nonreciprocal transport
(magnetochiral anisotropy) in a noncentrosymmetric superconductor $T_{\rm
d}$-MoTe$_2$ in the thin limit. Giant magnetochiral anisotropy (MCA) with a
rectification coefficient of $\gamma = 3.1 \times 10^6$ T$^{-1}$ A$^{-1}$ is
observed at 230 mK, below the superconducting transition temperature ($T_c$).
This is one of the largest values reported so far and is likely attributed to
the reduced symmetry of the crystal structure. The temperature dependence of
$\gamma$ indicates that the ratchet-like motion of magnetic vortices is the
origin of the MCA, as supported by our theoretical model. For bilayer $T_{\rm
d}$-MoTe$_2$, we successfully perform gate control of the MCA and realize
threefold modulation of $\gamma$. Our experimental results provide a new route
to realizing electrically controllable superconducting rectification devices in
a single material. | http://arxiv.org/abs/2303.09747v2 |
The transport properties of colloidal particles in active liquids have been
studied extensively. These studies have led to a deeper understanding of the
interactions between passive and active particles. However, the phase behavior of colloidal
particles in active media has received little attention. Here, we present a
combined experimental and numerical investigation of passive colloids dispersed
in suspensions of active particles. Our study reveals dynamic clustering of
colloids in active media due to an interplay of active noise and an attractive
effective potential between the colloids. The size-ratio of colloidal particles
to the bacteria sets the strength of the interaction. As the relative size of
the colloids increases, the effective potential becomes stronger and the
average size of the clusters grows. The simulations reveal a macroscopic phase
separation of passive colloids at sufficiently large size-ratios. We also
present the role of density fluctuations and hydrodynamic interactions in the
emergence of the effective interactions. | http://arxiv.org/abs/2301.11771v1 |
We present a novel method, based on the Saunderson corrections, to predict
the reflectance between a liquid interface and a dielectric diffuser. In this
method, the diffuse properties of the dielectric are characterized using a
single parameter, the multiple-scattering albedo, which is the same
irrespective of being in contact with air or liquid. We tested this method
using an apparatus based on a total integrating sphere capable of measuring
reflectance at both liquid and gas interfaces across various wavelengths of
light. We observed that the difference in the value of the multiple-scattering
albedo between the sphere full of liquid and empty was less than 0.9$\times
10^{-3}$, with the average difference normalized to the respective uncertainty
of only 0.7. These results confirm the reliability of our method and its
potential for use in a wide range of practical applications. | http://arxiv.org/abs/2305.03682v1 |
Data visualization is a powerful tool for exploring and communicating
insights in various domains. To automate visualization choice for datasets, a
task known as visualization recommendation has been proposed. Various
machine-learning-based approaches have been developed for this purpose, but
they often require a large corpus of dataset-visualization pairs for training
and lack natural explanations for their results. To address this research gap,
we propose LLM4Vis, a novel ChatGPT-based prompting approach to perform
visualization recommendation and return human-like explanations using very few
demonstration examples. Our approach involves feature description,
demonstration example selection, explanation generation, demonstration example
construction, and inference steps. To obtain demonstration examples with
high-quality explanations, we propose a new explanation-generation
bootstrapping method to iteratively refine generated explanations by
considering the previous generation and a template-based hint. Evaluations on the VizML dataset
show that LLM4Vis outperforms or performs similarly to supervised learning
models like Random Forest, Decision Tree, and MLP in both few-shot and
zero-shot settings. The qualitative evaluation also shows the effectiveness of
explanations generated by LLM4Vis. We make our code publicly available at
\href{https://github.com/demoleiwang/LLM4Vis}{https://github.com/demoleiwang/LLM4Vis}. | http://arxiv.org/abs/2310.07652v2 |
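The pipeline steps above amount to assembling a few-shot prompt from demonstration examples with explanations. The function below is a hypothetical sketch of that assembly; the template wording and the visualization labels are illustrative, not the exact LLM4Vis prompt.

```python
def build_llm4vis_style_prompt(features, demonstrations):
    """Assemble a few-shot visualization-recommendation prompt.

    `features` is a textual description of the dataset's columns;
    `demonstrations` is a list of (feature_description, vis_type, explanation)
    tuples. Illustrative sketch, not the exact prompt used by LLM4Vis.
    """
    parts = ["You are a visualization recommendation assistant.",
             "For each dataset, pick one of: bar, line, scatter, box.",
             "Explain your choice step by step.\n"]
    for feat, vis, expl in demonstrations:
        parts.append(f"Dataset: {feat}\nExplanation: {expl}\nVisualization: {vis}\n")
    # The final dataset is left open so the model completes the explanation.
    parts.append(f"Dataset: {features}\nExplanation:")
    return "\n".join(parts)
```

The prompt ends mid-template so that the model first produces the explanation and only then the recommended visualization, mirroring the explanation-before-answer ordering described above.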
Accurate intraday forecasts of the power output by PhotoVoltaic (PV) systems
are critical to improve the operation of energy distribution grids. We describe
a neural autoregressive model that aims to perform such intraday forecasts. We
build upon a physical, deterministic PV performance model, the output of which
is used as covariates in the context of the neural model. In addition, our
application data relates to a geographically distributed set of PV systems. We
address all PV sites with a single neural model, which embeds the information
about the PV site in specific covariates. We use a scale-free approach which
relies on the explicit modeling of seasonal effects. Our proposal repurposes a
model initially used in the retail sector and introduces a novel truncated
Gaussian output distribution. An ablation study and a comparison to alternative
architectures from the literature show that the components in the best
performing proposed model variant work synergistically to reach a skill score
of 15.72% with respect to the physical model, used as a baseline. | http://arxiv.org/abs/2303.08459v3 |
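A truncated Gaussian output can be trained by minimizing its negative log-likelihood; the sketch below (using `scipy.stats.truncnorm`, with illustrative bounds, not the paper's exact output head) shows how the truncation bounds enter in scipy's standardized form.

```python
import numpy as np
from scipy.stats import truncnorm

def truncated_gaussian_nll(y, mu, sigma, lower=0.0, upper=np.inf):
    """Negative log-likelihood of observations under a truncated Gaussian.

    PV power is non-negative (and bounded by plant capacity), which motivates
    truncating the Gaussian output; a network would emit (mu, sigma) per time
    step. Illustrative sketch, not the paper's exact output layer.
    """
    a = (lower - mu) / sigma   # scipy uses standardized truncation bounds
    b = (upper - mu) / sigma
    return -np.sum(truncnorm.logpdf(y, a, b, loc=mu, scale=sigma))
```

Minimizing this loss with respect to the network parameters producing `mu` and `sigma` yields a probabilistic forecast whose mass is confined to the physically admissible range.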
We aimed to build a new and updated C0-C2 chemical network to study the CHON
disequilibrium chemistry of warm and hot exoplanet atmospheres that relies on
extensively validated and recent state-of-the-art combustion networks. The
network is intended to be reliable for conditions between 500 and 2500 K and
between 100 and 10^-6 bar. We compared the predictions of seven networks over a large
set of experiments, covering a wide range of conditions (pressures,
temperatures, and initial compositions). To examine the consequences of this
new chemical network for exoplanet atmospheric studies, we generated abundance
profiles for GJ 436 b, GJ 1214 b, HD 189733 b, and HD 209458 b, using the 1D
kinetic model FRECKLL and calculated the corresponding transmission spectra
using TauREx 3.1. These spectra and abundance profiles have been compared with
results obtained with our previous chemical network. Our new kinetic network is
composed of 174 species and 1293 mostly reversible reactions. This network
proves to be more accurate than our previous one for the tested experimental
conditions. The nitrogen chemistry update is found to be impactful on the
abundance profiles, particularly for HCN, with differences up to four orders of
magnitude. The CO2 profiles are also significantly affected, with important
repercussions on the transmission spectrum of GJ 436 b. These effects highlight
the importance of using extensively validated chemical networks to gain
confidence in our models' predictions. As shown with CH2NH, the coupling between
carbon and nitrogen chemistry combined with radicals produced by photolysis can
have huge effects impacting the transmission spectra. | http://arxiv.org/abs/2310.08561v1 |
Probabilistic graphical models have become an important unsupervised learning
tool for detecting network structures for a variety of problems, including the
estimation of functional neuronal connectivity from two-photon calcium imaging
data. However, in the context of calcium imaging, technological limitations
only allow for partially overlapping layers of neurons in a brain region of
interest to be jointly recorded. In this case, graph estimation for the full
data requires inference for edge selection when many pairs of neurons have no
simultaneous observations. This leads to the Graph Quilting problem, which
seeks to estimate a graph in the presence of block-missingness in the empirical
covariance matrix. Solutions for the Graph Quilting problem have previously
been studied for Gaussian graphical models; however, neural activity data from
calcium imaging are often non-Gaussian, thereby requiring a more flexible
modeling approach. Thus, in our work, we study two approaches for nonparanormal
Graph Quilting based on the Gaussian copula graphical model, namely a maximum
likelihood procedure and a low-rank based framework. We provide theoretical
guarantees on edge recovery for the former approach under similar conditions to
those previously developed for the Gaussian setting, and we investigate the
empirical performance of both methods using simulations as well as real
calcium imaging data. Our approaches yield more scientifically meaningful
functional connectivity estimates compared to existing Gaussian graph quilting
methods for this calcium imaging data set. | http://arxiv.org/abs/2305.13491v1 |
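The nonparanormal model assumes the data become jointly Gaussian after monotone marginal transforms, which are typically estimated from ranks. The sketch below shows this standard rank-based transform, a building block of the Gaussian copula approach, not the paper's full Graph Quilting procedure.

```python
import numpy as np
from scipy.stats import norm, rankdata

def nonparanormal_transform(X):
    """Map each column of X to Gaussian scores via its empirical ranks.

    f_j(x) = Phi^{-1}(rank / (n + 1)) makes each marginal approximately
    standard normal while preserving the dependence structure, which is the
    standard rank-based estimator used with Gaussian copula graphical models.
    """
    n, p = X.shape
    Z = np.empty_like(X, dtype=float)
    for j in range(p):
        u = rankdata(X[:, j]) / (n + 1.0)   # ranks scaled into (0, 1)
        Z[:, j] = norm.ppf(u)
    return Z
```

After the transform, a graphical model can be fit to `Z` as if the data were Gaussian; this is why non-Gaussian calcium traces can still be handled within a Gaussian-like framework.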
This paper studies the controllability backbone problem in dynamical networks
defined over graphs. The main idea of the controllability backbone is to
identify a small subset of edges in a given network such that any subnetwork
containing those edges/links has at least the same network controllability as
the original network while assuming the same set of input/leader vertices. We
consider the strong structural controllability (SSC) in our work, which is
useful but computationally challenging. Thus, we utilize two lower bounds on
the network's SSC based on the zero forcing notion and graph distances. We
provide algorithms to compute controllability backbones while preserving these
lower bounds. We thoroughly analyze the proposed algorithms and compute the
number of edges in the controllability backbones. Finally, we compare and
numerically evaluate our methods on random graphs. | http://arxiv.org/abs/2309.02649v1 |
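A zero forcing set certifies a lower bound on SSC via a simple color-propagation rule. The following sketch computes the zero forcing closure of a leader set; this is the notion the backbone algorithms preserve, not the backbone computation itself.

```python
def zero_forcing_closure(adj, leaders):
    """Propagate the zero forcing rule from an initial leader set.

    Rule: a filled vertex with exactly one unfilled neighbor forces that
    neighbor to become filled; iterate until no rule applies. If the closure
    equals the whole vertex set, the leaders form a zero forcing set.
    """
    filled = set(leaders)
    changed = True
    while changed:
        changed = False
        for v in list(filled):
            unfilled = [u for u in adj[v] if u not in filled]
            if len(unfilled) == 1:          # v forces its unique unfilled neighbor
                filled.add(unfilled[0])
                changed = True
    return filled
```

On a path with one endpoint as leader the closure is the whole graph, whereas the center of a star forces nothing, illustrating why leader placement matters for the bound.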
Context. Apertif is a multi-beam receiver system for the Westerbork Synthesis
Radio Telescope that operates at 1.1-1.5 GHz, which overlaps with various radio
services, resulting in contamination of astronomical signals with
radio-frequency interference (RFI). Aims. We analyze approaches to mitigate
Apertif interference and design an automated detection procedure for its
imaging mode. Using this approach, we present long-term RFI detection results
of over 300 Apertif observations. Methods. Our approach is based on the
AOFlagger detection approach. We introduce several new features, including ways
to deal with ranges of invalid data (e.g. caused by shadowing) in both the
SumThreshold and scale-invariant rank operator steps; pre-calibration bandpass
calibration; auto-correlation flagging; and HI flagging avoidance. These
methods are implemented in a new framework that uses the Lua language for
scripting, which is new in AOFlagger version 3. Results. Our approach removes
RFI fully automatically, and is robust and effective enough for further
calibration and (continuum) imaging of these data. Analysis of 304 observations
shows an average of 11.1% of data lost to RFI, with a large spread. We
observe 14.6% RFI in auto-correlations. Computationally, AOFlagger achieves a
throughput of 370 MB/s on a single computing node. Compared to published
machine learning results, the method is one to two orders of magnitude faster. | http://arxiv.org/abs/2301.01562v1 |
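The SumThreshold idea underlying AOFlagger flags runs of M consecutive samples whose mean exceeds a level that decreases with M. A minimal one-dimensional sketch follows, with illustrative constants; the production algorithm sweeps both the time and frequency directions, reuses flags between passes, and combines many additional steps.

```python
import numpy as np

def sumthreshold_1d(data, chi1=6.0, rho=1.5, max_log2=5):
    """Minimal 1D SumThreshold pass, in the spirit of AOFlagger.

    Runs of M = 2^k consecutive samples are flagged when their sum exceeds
    M * chi_M with chi_M = chi1 / rho**k: longer runs are caught at lower
    per-sample levels, so broad weak RFI is still detected.
    """
    flags = np.zeros(len(data), dtype=bool)
    for k in range(max_log2 + 1):
        M = 2**k
        chi_m = chi1 / rho**k
        # Already-flagged samples count only as the threshold value, so one
        # strong spike does not drag whole windows above the level.
        work = np.where(flags, chi_m, data)
        for start in range(0, len(data) - M + 1):
            if work[start:start + M].sum() > chi_m * M:
                flags[start:start + M] = True
    return flags
```

A single strong spike is caught at M = 1, while a broad block of weak interference below `chi1` is only flagged once the window length reaches it.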
For the efficient simulation of open quantum systems we often use quantum
jump trajectories given by pure states that evolve stochastically to unravel
the dynamics of the underlying master equation. In the Markovian regime, when
the dynamics is described by a Gorini-Kossakowski-Sudarshan-Lindblad (GKSL)
master equation, this procedure is known as the Monte-Carlo wavefunction (MCWF)
approach. However, beyond ultraweak system-bath coupling, the dynamics of the
system is not described by an equation of GKSL type, but rather by the Redfield
equation, which can be brought into pseudo-Lindblad form. Here, negative
dissipation strengths prohibit the conventional approach. To overcome this
problem, we propose a pseudo-Lindblad quantum trajectory (PLQT) unraveling. It
does not require an effective extension of the state space, like other
approaches, except for the addition of a single classical bit. We test the PLQT
for the eternal non-Markovian master equation for a single qubit and an
interacting Fermi Hubbard chain coupled to a thermal bath and discuss its
computational effort compared to solving the full master equation. | http://arxiv.org/abs/2306.14876v3 |
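For reference, the conventional MCWF unraveling that the PLQT generalizes can be sketched for the simplest GKSL example, a decaying qubit: averaging the trajectories recovers the master-equation population $e^{-\gamma t}$. This is the standard baseline only, not the PLQT itself.

```python
import numpy as np

def mcwf_decay(gamma=1.0, t_final=1.0, dt=1e-3, n_traj=4000, seed=0):
    """Monte-Carlo wavefunction unraveling of qubit decay (GKSL case).

    H = 0 with a single jump operator L = sqrt(gamma)|g><e|; between jumps the
    state evolves under the non-Hermitian H_eff = -(i*gamma/2)|e><e| and is
    renormalized, while a jump resets it to |g>. Returns the trajectory-averaged
    excited-state population at each time step.
    """
    rng = np.random.default_rng(seed)
    steps = int(t_final / dt)
    amp_e = np.ones(n_traj)    # excited-state amplitude per trajectory
    amp_g = np.zeros(n_traj)   # ground-state amplitude per trajectory
    pop = np.zeros(steps + 1)
    pop[0] = np.mean(amp_e**2)
    decay = np.exp(-0.5 * gamma * dt)
    for s in range(1, steps + 1):
        p_jump = gamma * dt * amp_e**2            # first-order jump probability
        jump = rng.random(n_traj) < p_jump
        amp_e = np.where(jump, 0.0, amp_e * decay)
        amp_g = np.where(jump, 1.0, amp_g)
        norm = np.sqrt(amp_e**2 + amp_g**2)       # renormalize after no-jump step
        amp_e /= norm
        amp_g /= norm
        pop[s] = np.mean(amp_e**2)
    return pop
```

The PLQT construction modifies this scheme so that the negative dissipation strengths of the pseudo-Lindblad form can be sampled, at the cost of carrying one extra classical bit per trajectory.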
In this paper, we study the large deviation principle (LDP) for obstacle
problems governed by a T-monotone operator and small multiplicative stochastic
reaction. Our approach relies on a combination of a new sufficient condition
for proving the LDP, due to Matoussi, Sabbagh and Zhang [Appl. Math. Optim. 2021], and
Lewy-Stampacchia inequalities to manage the Lagrange-multiplier associated with
the obstacle. | http://arxiv.org/abs/2308.02206v2 |
In this paper we consider a one dimensional elastic system with double
porosity structure and with frictional damping in both porous equations. We
introduce two stability numbers $\chi_{0}$ and $\chi_{1}$ and prove that the
solution of the system decays exponentially provided that $\chi_{0}=0$ and
$\chi_{1}\neq0.$ Otherwise, we prove the lack of exponential decay. Our results
improve the results of \cite{Bazarra} and \cite{Nemsi}. | http://arxiv.org/abs/2307.12690v1 |
Unsupervised domain adaptation exploits labeled data from the source domain
and unlabeled data from the target one. In the Cross-Modality Domain Adaptation
for Medical Image Segmentation challenge (crossMoDA2022), contrast-enhanced T1
MRI volumes of the brain are provided as the source domain data, and
high-resolution T2 MRI volumes are provided as the target domain data. The
crossMoDA2022 challenge contains two tasks: segmentation of the vestibular
schwannoma (VS) and cochlea, and classification of VS with the Koos grade. In
this report, we present our solution for the crossMoDA2022 challenge. We employ
an image-to-image translation method for unsupervised domain adaptation and a
residual U-Net for the segmentation task. We use an SVM for the classification
task. The experimental results show that the mean DSC and ASSD are 0.614 and
2.936 for the segmentation task and the MA-MAE is 0.84 for the classification
task. | http://arxiv.org/abs/2302.08016v1 |
Attosecond pulses created by high-order harmonic generation in gases often
exhibit strong chromatic aberrations, arising from the broad bandwidth and
wavelength-dependent nonlinear light-matter interaction. When the driving laser
intensity varies spatially, as for Gaussian driving beams, the apparent source
position of the harmonics differs significantly from one order to the next,
thus affecting the achievable intensity and duration of the attosecond pulses
when they are focused on a target. We show that these chromatic aberrations can
be reduced by spatially shaping the fundamental beam to generate high-order
harmonics with a driver having a flat-top profile inside the gas medium. By
measuring both the intensity profile and wavefront for each harmonic in a
plane, we access the extreme ultra-violet (XUV) beam properties and investigate
these properties near focus. We observe that controlling chromatic aberrations
by flat-top spatial shaping strongly reduces the variation of the XUV spectrum
on the beam axis during propagation and, in return, the longitudinal
sensitivity of both the temporal profiles and the temporal shifts of the
focused attosecond pulses. | http://arxiv.org/abs/2301.11017v1 |
The ability of convolutional neural networks (CNNs) to recognize objects
regardless of their position in the image is due to the
translation-equivariance of the convolutional operation. Group-equivariant CNNs
transfer this equivariance to other transformations of the input. Dealing
appropriately with objects and object parts of different scale is challenging,
and scale can vary for multiple reasons such as the underlying object size or
the resolution of the imaging modality. In this paper, we propose a
scale-equivariant convolutional network layer for three-dimensional data that
guarantees scale-equivariance in 3D CNNs. Scale-equivariance lifts the burden
of having to learn each possible scale separately, allowing the neural network
to focus on higher-level learning goals, which leads to better results and
better data-efficiency. We provide an overview of the theoretical foundations
and scientific work on scale-equivariant neural networks in the two-dimensional
domain. We then transfer the concepts from 2D to the three-dimensional space
and create a scale-equivariant convolutional layer for 3D data. Using the
proposed scale-equivariant layer, we create a scale-equivariant U-Net for
medical image segmentation and compare it with a non-scale-equivariant baseline
method. Our experiments demonstrate the effectiveness of the proposed method in
achieving scale-equivariance for 3D medical image analysis. We publish our code
at https://github.com/wimmerth/scale-equivariant-3d-convnet for further
research and application. | http://arxiv.org/abs/2304.05864v1 |
This thesis covers a range of experimental and theoretical efforts to
elucidate the origin of the $4.8\sigma$ MiniBooNE low energy excess (LEE). We
begin with the follow-up MicroBooNE experiment, which took data along the BNB
from 2016 to 2021. This thesis specifically presents MicroBooNE's search for
$\nu_e$ charged-current quasi-elastic (CCQE) interactions consistent with
two-body scattering. The two-body CCQE analysis uses a novel reconstruction
process, including a number of deep-learning-based algorithms, to isolate a
sample of $\nu_e$ CCQE interaction candidates with $75\%$ purity. The analysis
rules out an entirely $\nu_e$-based explanation of the MiniBooNE excess at the
$2.4\sigma$ confidence level. We next perform a combined fit of MicroBooNE and
MiniBooNE data to the popular $3+1$ model; even after the MicroBooNE results,
allowed regions in $\Delta m^2$-$\sin^2 2\theta_{\mu e}$ parameter space
exist at the $3\sigma$ confidence level. This thesis also demonstrates that the
MicroBooNE data are consistent with a $\overline{\nu}_e$-based explanation of
the MiniBooNE LEE at the $<2\sigma$ confidence level. Next, we investigate a
phenomenological explanation of the MiniBooNE excess combining the $3+1$ model
with a dipole-coupled heavy neutral lepton (HNL). It is shown that a 500 MeV
HNL can accommodate the energy and angular distributions of the LEE at the
$2\sigma$ confidence level while avoiding stringent constraints derived from
MINER$\nu$A elastic scattering data. Finally, we discuss the Coherent
CAPTAIN-Mills experiment--a 10-ton light-based liquid argon detector at Los
Alamos National Laboratory. The background rejection achieved from a novel
Cherenkov-based reconstruction algorithm will enable world-leading sensitivity
to a number of beyond-the-Standard Model physics scenarios, including
dipole-coupled HNLs. | http://arxiv.org/abs/2308.12015v1 |
Applying very small, purely radial strains to amorphous solids in radial
geometry, one observes elastic responses that break the radial symmetry. Without
any plasticity involved, the responses indicate nonlinear mode coupling
contributions even for minute strains. We show that these symmetry-breaking
responses are due to disorder, typical to amorphous configurations. The
symmetry breaking responses are quantitatively explained using the classical
Michell solutions which are excited by mode coupling. | http://arxiv.org/abs/2301.08546v1 |
We extract the Hubble law from frequency-shift considerations of test
particles revolving around the Kerr black hole in asymptotically de Sitter spacetime.
To this end, we take into account massive geodesic particles circularly
orbiting the Kerr-de Sitter black holes that emit redshifted photons towards a
distant observer which is moving away from the emitter-black hole system. By
considering this configuration, we obtain an expression for redshift in terms
of the spacetime parameters, such as mass, angular momentum, and the
cosmological constant. Then, we find the frequency shift of photons versus the
Hubble constant with the help of some physically motivated approximations.
Finally, some exact formulas for the Schwarzschild black hole mass and the
Hubble constant in terms of the observational redshift of massive bodies
circularly orbiting this black hole are extracted. Our results suggest a new
independent general relativistic approach to obtaining the late-time Hubble
constant in terms of observable quantities. | http://arxiv.org/abs/2302.11547v2 |
As one of the closest supernovae (SNe) in the last decade, SN 2023ixf is an
unprecedented target to investigate the progenitor star that exploded. However,
there is still significant uncertainty in the reported progenitor properties.
In this work, we present a detailed study of the progenitor of SN 2023ixf with
two independent analyses. We first modelled its spectral energy distribution
(SED) based on Hubble Space Telescope optical, Spitzer mid-infrared (IR), and
ground-based near-IR data. We find that stellar pulsation and circumstellar
extinction have great impacts on SED fitting, and the result suggests a
relatively massive red supergiant (RSG) surrounded by C-rich dust with an
initial mass of 16.2--17.4 Msun. The corresponding rate of mass-loss occurring
at least 3 years before the SN explosion is about $2 \times 10^{-4}
M_\odot$yr$^{-1}$. We also derived the star formation history of the SN
environment based on resolved stellar populations, and the most recent
star-forming epoch corresponds to a progenitor initial mass of 17--19 Msun, in
agreement with that from our SED fitting. Therefore, we conclude that the
progenitor of SN 2023ixf is close to the high-mass end for Type II SN
progenitors. | http://arxiv.org/abs/2308.04677v2 |
Crosslingual conditional generation (e.g., machine translation) has long
enjoyed the benefits of scaling. Nonetheless, there are still issues that scale
alone may not overcome. A source query in one language, for instance, may yield
several translation options in another language without any extra context. Only
one translation may be acceptable, however, depending on the translator's
preferences and goals. Choosing the incorrect option might significantly affect
translation usefulness and quality. We propose a novel method, interactive-chain
prompting -- a series of question, answering and generation intermediate steps
between a Translator model and a User model -- that reduces translations into a
list of subproblems addressing ambiguities and then resolving such subproblems
before producing the final text to be translated. To check ambiguity resolution
capabilities and evaluate translation quality, we create a dataset exhibiting
different linguistic phenomena which leads to ambiguities at inference for four
languages. To encourage further exploration in this direction, we release all
datasets. We note that interactive-chain prompting, using eight interactions as
exemplars, consistently surpasses prompt-based methods with direct access to
background information to resolve ambiguities. | http://arxiv.org/abs/2301.10309v1 |
In astronomy, there is an opportunity to enhance the practice of validating
models through statistical techniques, specifically to account for measurement
error uncertainties. While models are commonly used to describe observations,
there are instances where there is a lack of agreement between the two. This
can occur when models are derived from incomplete theories, when a
better-fitting model is not available or when measurement uncertainties are not
correctly considered. However, with the application of specific tests that
assess the consistency between observations and astrophysical models in a
model-independent way, it is possible to address this issue. The consistency
tests (ConTESTs) developed in this paper use a combination of non-parametric
methods and distance measures to obtain a test statistic that evaluates the
closeness of the astrophysical model to the observations. To draw conclusions
on the consistency hypothesis, a simulation-based methodology is performed. In
particular, we built two tests for density models and two for regression models
to be used depending on the case at hand and the power of the test needed. We
used ConTEST to examine synthetic examples in order to determine the
effectiveness of the tests and provide guidance on using them while building a
model. We also applied ConTEST to various astronomy cases, identifying which
models were consistent and, if not, identifying the probable causes of
rejection. | http://arxiv.org/abs/2302.09308v1 |
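The general recipe, a distance-based test statistic whose null distribution is obtained by re-simulating from the candidate model, can be sketched as follows; this is a generic illustration of the simulation-based methodology, not the exact ConTEST statistics.

```python
import numpy as np

def consistency_pvalue(obs, sample_model, statistic, n_sim=500, seed=0):
    """Simulation-based consistency test in the spirit of ConTEST.

    `sample_model(rng, n)` draws a synthetic dataset of size n from the
    candidate model; `statistic(data, ref)` measures the distance between a
    dataset and a reference sample from the model. The observed distance is
    compared against its null distribution built by re-simulating.
    """
    rng = np.random.default_rng(seed)
    n = len(obs)
    ref = sample_model(rng, 10 * n)          # large reference sample
    t_obs = statistic(obs, ref)
    t_null = np.array([statistic(sample_model(rng, n), ref)
                       for _ in range(n_sim)])
    return np.mean(t_null >= t_obs)          # one-sided Monte-Carlo p-value
```

Data drawn from the model yields a non-extreme p-value, while data shifted away from the model is rejected, which is the behavior the consistency hypothesis test is built around.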
In this work we study the acyclic orientations of complete multipartite
graphs. We obtain an encoding of the acyclic orientations of the complete
$p$-partite graph with size of its parts $n:=n_1,n_2,\ldots,n_p$ via a vector
with $p$ symbols and length $n_1+n_2+\ldots+n_p$ when the parts are fixed but
not the vertices in each part. We also give a recursive way to construct all
acyclic orientations of a complete multipartite graph; this construction can
easily be carried out by computer in $\mathcal{O}(n)$ time. Moreover, the obtained
codification of the acyclic orientations allows us to count the number of
non-isomorphic acyclic orientations of the complete multipartite graphs.
Furthermore, we obtain a closed formula for non-isomorphic acyclic orientations
of the complete multipartite graphs with a directed spanning tree. In addition,
we obtain a closed formula for the ordinary generating functions for the number
of strings in the alphabet $\{s_1,s_2,\ldots,s_p\}$ with $k_1$ characters
$s_1$, $k_2$ characters $s_2$, and so on with $k_p$ characters $s_p$ such that
no two consecutive characters are the same. Finally, we obtain a closed formula
for the number of acyclic orientations of a complete multipartite graph
$K_{n_1,\ldots,n_p}$ with labelled vertices. | http://arxiv.org/abs/2303.09021v1 |
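The string-counting result can be cross-checked by dynamic programming over the remaining multiplicities (a brute-force sketch for small instances, not the paper's closed-form generating function):

```python
from functools import lru_cache

def count_no_repeat_strings(counts):
    """Count strings using exactly counts[i] copies of symbol i such that
    no two consecutive characters are the same."""
    p = len(counts)

    @lru_cache(maxsize=None)
    def rec(remaining, last):
        # All symbols placed: one valid string completed.
        if sum(remaining) == 0:
            return 1
        total = 0
        for i in range(p):
            # Place symbol i next, provided it differs from the last one.
            if i != last and remaining[i] > 0:
                nxt = list(remaining)
                nxt[i] -= 1
                total += rec(tuple(nxt), i)
        return total

    return rec(tuple(counts), -1)

# Two symbols with equal counts: only the two alternating strings qualify.
print(count_no_repeat_strings((3, 3)))  # 2
# Three distinct symbols, one copy each: all 3! = 6 permutations qualify.
print(count_no_repeat_strings((1, 1, 1)))  # 6
```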
We consider a self-consistent axially symmetric system supported by a
classical nonlinear spinor field minimally coupled to electric and magnetic
Maxwell fields. The presence of the nonlinearity of the spinor field ensures
the existence of a minimum positive energy of the system (a mass gap), of a
minimum charge (a charge gap), and of a minimum magnetic moment. In turn, the
presence of the electric charge results in qualitative changes in the behavior
of physical characteristics of the systems under consideration as compared with
the case of an electrically neutral spinor field. It is shown that, with a
suitable choice of free system parameters, there exists a regular finite-energy
particlelike solution describing a localized spinning object whose physical
parameters correspond to the main characteristics of an electron/positron
(including the spin equal to $1/2$), but with the characteristic size
comparable to the corresponding Compton wavelength. Also, we show that four
local Dirac equations are equivalent to two nonlocal equations. | http://arxiv.org/abs/2310.00883v1 |
A likely source of a gravitational-wave background (GWB) in the frequency
band of the Advanced LIGO, Virgo and KAGRA detectors is the superposition of
signals from the population of unresolvable stellar-mass binary-black-hole
(BBH) mergers throughout the Universe. Since the duration of a BBH merger in
band ($\sim\!1~{\rm s}$) is much shorter than the expected separation between
neighboring mergers ($\sim\!10^3~{\rm s}$), the observed signal will be
"popcorn-like" or intermittent with duty cycles of order $10^{-3}$. However,
the standard cross-correlation search for stochastic GWBs currently performed
by the LIGO-Virgo-KAGRA collaboration is based on a continuous-Gaussian signal
model, which does not take into account the intermittent nature of the
background. The latter is better described by a Gaussian mixture model, which
includes a duty cycle parameter that quantifies the degree of intermittence.
Building on an earlier paper by Drasco and Flanagan, we propose a
stochastic-signal-based search for intermittent GWBs. For such signals, this
search performs better than the standard continuous cross-correlation search.
We present results of our stochastic-signal-based approach for intermittent
GWBs applied to simulated data for some simple models, and compare its
performance to the other search methods, both in terms of detection and signal
characterization. Additional testing on more realistic simulated data sets,
e.g., consisting of astrophysically-motivated BBH merger signals injected into
colored detector noise containing noise transients, will be needed before this
method can be applied with confidence on real gravitational-wave data. | http://arxiv.org/abs/2301.07675v1 |
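The mixture model at the heart of the proposed search can be illustrated with a toy simulation: white-noise segments, a fraction of which carry an extra Gaussian signal, scored by a likelihood with an explicit duty-cycle parameter. All values are illustrative assumptions, and the duty cycle is set far above the $\sim 10^{-3}$ of real BBH backgrounds so the demonstration runs quickly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy intermittent background: segments of detector noise, a fraction xi of
# which also contain a Gaussian stochastic signal (duty cycle xi).
n_seg, seg_len = 5000, 128
xi, sigma_n, sigma_s = 0.1, 1.0, 0.5  # illustrative, not LVK-calibrated

present = rng.random(n_seg) < xi
data = rng.normal(0.0, sigma_n, (n_seg, seg_len))
data[present] += rng.normal(0.0, sigma_s, (present.sum(), seg_len))

def loglike(data, xi, sn, ss):
    """Gaussian mixture log-likelihood with duty-cycle parameter xi."""
    var0, var1 = sn**2, sn**2 + ss**2
    ll0 = -0.5 * (data**2 / var0 + np.log(2 * np.pi * var0)).sum(axis=1)
    ll1 = -0.5 * (data**2 / var1 + np.log(2 * np.pi * var1)).sum(axis=1)
    # log[(1 - xi) exp(ll0) + xi exp(ll1)] per segment, computed stably.
    m = np.maximum(ll0, ll1)
    return (m + np.log((1 - xi) * np.exp(ll0 - m) + xi * np.exp(ll1 - m))).sum()

# The mixture likelihood should peak near the true duty cycle.
grid = [0.02, 0.05, 0.1, 0.2, 0.5]
best = max(grid, key=lambda x: loglike(data, x, sigma_n, sigma_s))
print(best)
```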
Surveillance systems have emerged as crucial elements in upholding peace and
security in the modern world. Their ubiquity aids in monitoring suspicious
activities effectively. However, in densely populated environments, continuous
active monitoring becomes impractical, necessitating the development of
intelligent surveillance systems. The integration of AI into the surveillance
domain was a major advance; however, speed issues have prevented its widespread
deployment in the field. Quantum artificial intelligence has recently enabled a
significant breakthrough: quantum AI-based surveillance systems have been shown
to be more accurate and capable of performing well in real-time scenarios. In
this research, a RetinaNet model is integrated with a quantum convolutional
neural network (QCNN) and termed Quantum-RetinaNet. By harnessing the quantum
capabilities of the QCNN,
Quantum-RetinaNet strikes a balance between accuracy and speed. This innovative
integration positions it as a game-changer, addressing the challenges of active
monitoring in densely populated scenarios. As demand for efficient surveillance
solutions continues to grow, Quantum-RetinaNet offers a compelling alternative
to existing CNN models, upholding accuracy standards without sacrificing
real-time performance. The unique attributes of Quantum-RetinaNet have
far-reaching implications for the future of intelligent surveillance. With its
enhanced processing speed, it is poised to revolutionize the field, catering to
the pressing need for rapid yet precise monitoring. As Quantum-RetinaNet
becomes the new standard, it ensures public safety and security while pushing
the boundaries of AI in surveillance. | http://arxiv.org/abs/2309.03231v1 |
We present a full-wave Maxwell-density matrix simulation tool including
c-number stochastic noise terms for the modeling of the spatiotemporal dynamics
in active photonic devices, such as quantum cascade lasers (QCLs) and quantum
dot (QD) structures. The coherent light-matter interaction in such devices
plays an important role in the generation of frequency combs and other
nonlinear and nonclassical optical phenomena. Since the emergence of nonlinear
and nonclassical features is directly linked to the noise properties, detailed
simulations of the noise characteristics are required for the development of
low-noise quantum optoelectronic sources. Our semiclassical simulation
framework is based on the Lindblad equation for the electron dynamics, coupled
with Maxwell's equations for the optical propagation in the laser waveguide.
Fluctuations arising from interactions of the optical field and quantum system
with their reservoirs are treated within the quantum Langevin theory. Here, the
fluctuations are included by adding stochastic c-number terms to the
Maxwell-density matrix equations. The implementation in the mbsolve dynamic
simulation framework is publicly available. | http://arxiv.org/abs/2310.16039v2 |
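The stochastic c-number approach amounts to adding properly scaled white-noise terms while integrating the equations of motion; a minimal Euler-Maruyama sketch for a single relaxing variable (toy coefficients, not the paper's Maxwell-density matrix system):

```python
import numpy as np

# Euler-Maruyama integration of a relaxation equation with an added
# stochastic c-number noise term, the same pattern used to include Langevin
# fluctuations in dynamic simulations. Coefficients are illustrative.
rng = np.random.default_rng(2)
gamma, D, dt, steps, n_traj = 1.0, 0.05, 1e-3, 20000, 200

x = np.zeros(n_traj)
for _ in range(steps):
    # deterministic drift plus a white-noise term scaled by sqrt(dt)
    x += -gamma * x * dt + np.sqrt(2 * D * dt) * rng.normal(size=n_traj)

# The stationary variance of this Ornstein-Uhlenbeck process is D / gamma.
print(x.var())  # fluctuates around 0.05
```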
We propose a novel robust Model Predictive Control (MPC) scheme for nonlinear
multi-input multi-output systems of relative degree one with stable internal
dynamics. The proposed algorithm is a combination of funnel MPC, i.e., MPC with
a particular stage cost, and the model-free adaptive funnel controller. The new
robust funnel MPC scheme guarantees output tracking of reference signals within
prescribed performance bounds -- even in the presence of unknown disturbances
and a structural model-plant mismatch. We show initial and recursive
feasibility of the proposed control scheme without imposing terminal conditions
or any requirements on the prediction horizon. Moreover, we allow for model
updates at runtime. To this end, we propose a proper initialization strategy,
which ensures that recursive feasibility is preserved. Finally, we validate the
performance of the proposed robust MPC scheme by simulations. | http://arxiv.org/abs/2302.01754v2 |
Speech representation learning with self-supervised algorithms has resulted
in notable performance boosts in many downstream tasks. Recent work combined
self-supervised learning (SSL) and visually grounded speech (VGS) processing
mechanisms for representation learning. The joint training with SSL and VGS
mechanisms provides the opportunity to utilize both unlabeled speech and
speech-related visual information based on data availability. This has been
shown to enhance the quality of learned representations, especially for encoding
semantic- and lexical-level knowledge. In this work, we further study the joint
optimization of wav2vec 2.0-based SSL and transformer-based VGS as a multi-task
learning system. We explore a set of training scenarios to understand how
speech representations are shared or transferred between the two tasks, and
what the optimal training strategy is for cross-modal semantic retrieval and
phoneme discrimination performance. We find that sequential
training with wav2vec 2.0 first and VGS next provides higher performance on
audio-visual retrieval compared to simultaneous optimization of both learning
mechanisms. However, the parallel SSL-VGS training reduces the effects of
catastrophic forgetting when switching between optimization criteria. Moreover,
the results suggest that phonemic representations learned through the VGS
mechanism may generalize better across datasets compared to those learned with
SSL. | http://arxiv.org/abs/2306.02972v1 |
We benchmark the performances of Qrack, an open-source software library for
the high-performance classical simulation of (gate-model) quantum computers.
Qrack simulates, in the Schr\"odinger picture, the exact quantum state of $n$
qubits evolving under the application of a circuit composed of elementary
quantum gates. Moreover, Qrack can also run approximate simulations in which a
tunable reduction of the quantum state fidelity is traded for a significant
reduction of the execution time and memory footprint. In this work, we give an
overview of both simulation methods (exact and approximate), highlighting the
main physics-based and software-based techniques. Moreover, we run
computationally heavy benchmarks on a single GPU, executing large quantum
Fourier transform circuits and large random circuits. Compared with other
classical simulators, we report competitive execution times for the exact
simulation of Fourier transform circuits with up to 27 qubits. We also
demonstrate the approximate simulation of all amplitudes of random circuits
acting on 54 qubits with 7 layers at average fidelity higher than $4\%$, a task
commonly considered hard without super-computing resources. | http://arxiv.org/abs/2304.14969v2 |
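Qrack's exact mode evolves the full $2^n$-amplitude state under each gate; the idea can be sketched with a generic NumPy statevector simulator running the QFT circuit used in the benchmarks. This is a from-scratch illustration, not Qrack's API, and the $2^n$ memory growth is why 27-qubit QFTs are already demanding:

```python
import numpy as np

def apply_1q(state, gate, q, n):
    """Apply a 2x2 gate to qubit q (qubit 0 = most significant index bit)."""
    psi = np.moveaxis(state.reshape([2] * n), q, 0)
    psi = np.tensordot(gate, psi, axes=([1], [0]))
    return np.moveaxis(psi, 0, q).reshape(-1)

def apply_cphase(state, c, t, theta, n):
    """Multiply amplitudes with qubits c and t both equal to 1 by e^{i theta}."""
    idx = np.arange(2 ** n)
    both = ((idx >> (n - 1 - c)) & 1) & ((idx >> (n - 1 - t)) & 1)
    out = state.astype(complex)
    out[both == 1] *= np.exp(1j * theta)
    return out

def qft(state, n):
    """Textbook QFT circuit; output amplitudes come out in bit-reversed order."""
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    for q in range(n):
        state = apply_1q(state, H, q, n)
        for t in range(q + 1, n):
            state = apply_cphase(state, t, q, np.pi / 2 ** (t - q), n)
    return state

n = 3
psi0 = np.zeros(2 ** n, dtype=complex); psi0[0] = 1.0
print(np.round(qft(psi0, n), 3))  # uniform amplitudes, all 1/sqrt(8)
```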
In this study, we present an integro-differential model to simulate the local
spread of infections. The model incorporates a standard
susceptible-infected-recovered (\textit{SIR}-) model enhanced by an integral
kernel, allowing for non-homogeneous mixing between susceptibles and
infectives. We define requirements for the kernel function and derive
analytical results for both the \textit{SIR}- and a reduced
susceptible-infected-susceptible (\textit{SIS}-) model, especially the
uniqueness of solutions.
In order to optimize the balance between disease containment and the social
and political costs associated with lockdown measures, we set up requirements
for the implementation of a control function and show examples for three
different formulations of the control: continuous and time-dependent,
continuous and space- and time-dependent, and piecewise constant space- and
time-dependent. The latter represents reality more closely, as the control
cannot be updated at every time and location. We compute the optimal control
values for all of these setups; a continuous, space- and time-dependent control
naturally performs best, yet the discrete setting also yields reasonable
results.
To validate the numerical results of the integro-differential model, we
compare them to an established agent-based model that incorporates social and
other microscopical factors more accurately and thus acts as a benchmark for
the validity of the integro-differential approach. A close match between the
results of both models validates the integro-differential model as an efficient
macroscopic proxy. Since computing an optimal control strategy is
computationally very expensive for agent-based models, yet comparatively cheap
for the integro-differential model, using the proxy model might have
interesting implications for future research. | http://arxiv.org/abs/2307.10087v1 |
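A minimal 1-D discretization shows the structure of the integro-differential SIR model, with the integral kernel realized as a row-normalized mixing matrix (all parameter values are illustrative assumptions, not the paper's):

```python
import numpy as np

# 1-D SIR dynamics with an integral mixing kernel:
#   dS/dt = -beta * S(x) * (K * I)(x)
#   dI/dt =  beta * S(x) * (K * I)(x) - gamma * I(x)
#   dR/dt =  gamma * I(x)
nx, dx = 100, 1.0
x = np.arange(nx) * dx
beta, gamma, dt, steps = 0.5, 0.1, 0.1, 2000  # illustrative values

# Gaussian kernel; row-normalized so a flat kernel recovers classic SIR mixing.
K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 5.0) ** 2)
K /= K.sum(axis=1, keepdims=True)

S = np.ones(nx)
I = np.zeros(nx); I[nx // 2] = 0.01; S -= I  # small seed infection
R = np.zeros(nx)

for _ in range(steps):
    force = beta * S * (K @ I)  # non-homogeneous force of infection
    S, I, R = S - dt * force, I + dt * (force - gamma * I), R + dt * gamma * I

# Conservation: S + I + R stays 1 at every grid point.
assert np.allclose(S + I + R, 1.0)
print(R.mean())  # final epidemic size across the domain
```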
The dark ages 21-cm signal is a powerful tool for precision cosmology and
probing new physics. We study two non-standard models: an excess radio
background (ERB) model (possibly generated by dark matter decay) and the
millicharged dark matter (mDM) model. These models were inspired by the
possible EDGES detection of a strong global 21-cm absorption during cosmic
dawn, but more generally they provide a way to anticipate the potential
discovery space. During the dark ages the 21-cm global signal in the ERB model
reaches a saturated form for an amplitude $A_{\rm r}=0.4$, where $A_{\rm r}$ is
the radio background intensity at cosmic dawn relative to the cosmic microwave
background. This amplitude is one-fifth of the minimum required to explain the
EDGES signal, and corresponds to just 0.1% of the observed extragalactic
background; it would give a signal that can be detected at 5.9$\sigma$
significance (compared to $4.1\,\sigma$ for the standard signal) and can be
distinguished from the standard (no ERB) signal at $8.5\,\sigma$, all with a
1,000 hr global signal measurement. The 21-cm power spectrum has potentially
more information, but far greater resources would be required for comparable
constraints. For the mDM model, over a range of viable parameters, the global
signal detection significance would be $4.7-7.2\,\sigma$, and it could be
distinguished from the standard at $2.2-9.3\,\sigma$. With an array of global
signal antennas achieving an effective 100,000 hr integration, the significance
would be $10\,\times$ better. Our analysis helps motivate the development of
lunar and space-based dark ages experiments. | http://arxiv.org/abs/2310.15530v2 |
The proliferation of the Internet of Things (IoT) has raised concerns about
the security of connected devices. There is a need to develop suitable and
cost-efficient methods to identify vulnerabilities in IoT devices in order to
address them before attackers seize opportunities to compromise them. The
deception technique is a prominent approach to improving the security posture
of IoT systems. A honeypot is a popular deception technique that mimics
realistic interactions and encourages unauthorised users (attackers) to
launch attacks. Due to the large number and heterogeneity of IoT devices,
manually crafting low- and high-interaction honeypots is not affordable.
This has forced researchers to seek innovative ways to build honeypots for IoT
devices. In this paper, we propose a honeypot for IoT devices that uses machine
learning techniques to learn and interact with attackers automatically. The
evaluation of the proposed model indicates that our system can improve the
session length with attackers and capture more attacks on the IoT network. | http://arxiv.org/abs/2303.12367v1 |
Offline pretraining with a static dataset followed by online fine-tuning
(offline-to-online, or OtO) is a paradigm well matched to a real-world RL
deployment process. In this scenario, we aim to find the best-performing policy
within a limited budget of online interactions. Previous work in the OtO
setting has focused on correcting for bias introduced by the policy-constraint
mechanisms of offline RL algorithms. Such constraints keep the learned policy
close to the behavior policy that collected the dataset, but we show this can
unnecessarily limit policy performance if the behavior policy is far from
optimal. Instead, we forgo constraints and frame OtO RL as an exploration
problem that aims to maximize the benefit of online data-collection. We first
study the major online RL exploration methods based on intrinsic rewards and
UCB in the OtO setting, showing that intrinsic rewards add training instability
through reward-function modification, and that UCB methods are myopic, since it
is unclear which learned component's ensemble to use for action selection. We then
introduce an algorithm for planning to go out-of-distribution (PTGOOD) that
avoids these issues. PTGOOD uses a non-myopic planning procedure that targets
exploration in relatively high-reward regions of the state-action space
unlikely to be visited by the behavior policy. By leveraging concepts from the
Conditional Entropy Bottleneck, PTGOOD encourages data collected online to
provide new information relevant to improving the final deployment policy
without altering rewards. We show empirically in several continuous control
tasks that PTGOOD significantly improves agent returns during online
fine-tuning and avoids the suboptimal policy convergence that many of our
baselines exhibit in several environments. | http://arxiv.org/abs/2310.05723v3 |
This paper will present a multi-fidelity, data-adaptive approach with a Long
Short-Term Memory (LSTM) neural network to estimate ship response statistics in
bimodal, bidirectional seas. The study will employ a fast, low-fidelity,
volume-based tool, SimpleCode, and a higher-fidelity tool known as the Large
Amplitude Motion Program (LAMP). SimpleCode and LAMP training data were
generated under common bimodal, bidirectional sea conditions in the North
Atlantic. After training an LSTM network with LAMP ship motion response data,
a sample route was traversed, randomly sampled historical weather was input
into SimpleCode and the LSTM network, and the outputs were compared against
the higher-fidelity results. | http://arxiv.org/abs/2307.08810v1 |
We show how to "compile" human-readable programs into standard decoder-only
transformer models. Our compiler, Tracr, generates models with known structure.
This structure can be used to design experiments. For example, we use it to
study "superposition" in transformers that execute multi-step algorithms.
Additionally, the known structure of Tracr-compiled models can serve as
ground-truth for evaluating interpretability methods. Commonly, because the
"programs" learned by transformers are unknown it is unclear whether an
interpretation succeeded. We demonstrate our approach by implementing and
examining programs including computing token frequencies, sorting, and
parenthesis checking. We provide an open-source implementation of Tracr at
https://github.com/google-deepmind/tracr. | http://arxiv.org/abs/2301.05062v5 |
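The programs mentioned are simple enough to state directly; a plain-Python rendering of two of them, illustrating the kind of explicit, human-readable "program" that a compiler like Tracr turns into transformer weights (ordinary Python here, not Tracr's RASP input language):

```python
def token_frequencies(tokens):
    """For each position, the relative frequency of that position's token
    in the whole sequence."""
    n = len(tokens)
    return [tokens.count(t) / n for t in tokens]

def balanced_parens(tokens):
    """Check that parentheses are balanced, tracking depth left to right."""
    depth, ok = 0, []
    for t in tokens:
        depth += (t == "(") - (t == ")")
        ok.append(depth >= 0)  # depth must never go negative
    return all(ok) and depth == 0

print(token_frequencies(["a", "b", "a", "c"]))  # [0.5, 0.25, 0.5, 0.25]
print(balanced_parens(list("(()())")))  # True
```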
Contrastive self-supervised learning has gained attention for its ability to
create high-quality representations from large unlabelled data sets. A key
reason that these powerful features enable data-efficient learning of
downstream tasks is that they provide augmentation invariance, which is often a
useful inductive bias. However, the amount and type of invariance preferred are
not known a priori and vary across different downstream tasks. We therefore
propose a multi-task self-supervised framework (MT-SLVR) that learns both
variant and invariant features in a parameter-efficient manner. Our multi-task
representation provides a strong and flexible feature that benefits diverse
downstream tasks. We evaluate our approach on few-shot classification tasks
drawn from a variety of audio domains and demonstrate improved classification
performance on all of them. | http://arxiv.org/abs/2305.17191v2 |
We discuss the problem of bounding partially identifiable queries, such as
counterfactuals, in Pearlian structural causal models. A recently proposed
iterated EM scheme yields an inner approximation of those bounds by sampling
the initialisation parameters. Such a method requires multiple (Bayesian
network) queries over models sharing the same structural equations and
topology, but different exogenous probabilities. This setup makes a compilation
of the underlying model to an arithmetic circuit advantageous, thus inducing a
sizeable inferential speed-up. We show how a single symbolic knowledge
compilation allows us to obtain the circuit structure with symbolic parameters
to be replaced by their actual values when computing the different queries. We
also discuss parallelisation techniques to further speed up the bound
computation. Experiments against standard Bayesian network inference show clear
computational advantages with up to an order of magnitude of speed-up. | http://arxiv.org/abs/2310.03352v1 |
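The compilation idea can be sketched in miniature: the query polynomial is built once as a circuit with symbolic leaves, and each EM iteration only substitutes new exogenous probabilities (the circuit, names, and numbers below are illustrative, not the paper's compiler output):

```python
import math

# P(y) in a toy two-node model, P(y) = sum_u P(u) * P(y | u), compiled once
# into a (+, *) circuit whose leaves stay symbolic.
circuit = ("+",
           ("*", "P(u0)", "P(y|u0)"),
           ("*", "P(u1)", "P(y|u1)"))

def evaluate(node, params):
    """Evaluate a (+, *) circuit, substituting numbers for symbolic leaves."""
    if isinstance(node, str):
        return params[node]
    op, *args = node
    vals = [evaluate(a, params) for a in args]
    return sum(vals) if op == "+" else math.prod(vals)

# Two queries over the same structure with different exogenous probabilities:
q1 = evaluate(circuit, {"P(u0)": 0.3, "P(u1)": 0.7,
                        "P(y|u0)": 0.9, "P(y|u1)": 0.2})
q2 = evaluate(circuit, {"P(u0)": 0.6, "P(u1)": 0.4,
                        "P(y|u0)": 0.9, "P(y|u1)": 0.2})
print(round(q1, 2), round(q2, 2))  # 0.41 0.62
```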
With the significant advancements in artificial intelligence (AI)
technologies and powerful computational capabilities, generative AI (GAI) has
become a pivotal digital content generation technique for offering superior
digital services. However, directing GAI towards desired outputs still suffers
from the inherent instability of the AI model. In this paper, we design a novel
framework that utilizes wireless perception to guide GAI (WiPe-GAI) for
providing digital content generation service, i.e., AI-generated content
(AIGC), in resource-constrained mobile edge networks. Specifically, we first
propose a new sequential multi-scale perception (SMSP) algorithm to predict
user skeleton based on the channel state information (CSI) extracted from
wireless signals. This prediction then guides GAI to provide users with AIGC,
such as virtual character generation. To ensure the efficient operation of the
proposed framework in resource-constrained networks, we further design a
pricing-based incentive mechanism and introduce a diffusion model based
approach to generate an optimal pricing strategy for the service provisioning.
The strategy maximizes the user's utility while enhancing the participation of
the virtual service provider (VSP) in AIGC provision. The experimental results
demonstrate the effectiveness of the designed framework in terms of skeleton
prediction and optimal pricing strategy generation, compared with other
existing solutions. | http://arxiv.org/abs/2309.01426v1 |
Pioneered by Benczur and Karger for cuts in graphs [STOC'96], sparsification
is a fundamental topic with wide-ranging applications that has been studied,
e.g., for graphs and hypergraphs, in a combinatorial and a spectral setting,
and with additive and multiplicative error bounds. Rafiey and Yoshida recently
considered sparsification of decomposable submodular functions [AAAI'22]. We
extend their work by presenting an efficient algorithm for a sparsifier for
monotone $k$-submodular functions of low curvature. | http://arxiv.org/abs/2302.03143v1 |
The parameter identifiability problem for a dynamical system is to determine
whether the parameters of the system can be found from data for the outputs of
the system. Verifying whether the parameters are identifiable is a necessary
first step before a meaningful parameter estimation can take place.
Non-identifiability occurs in practical models. To reparametrize a model to
achieve identifiability is a challenge. The existing approaches have been shown
to be useful for many important examples. However, these approaches are either
limited to linear models and scaling parametrizations or are not guaranteed to
find a reparametrization even if it exists. In the present paper, we prove that
there always exists a locally identifiable model with the same input-output
behaviour as the original one obtained from a given one by a partial
specialization of the parameters. As an extra feature of our approach, the
resulting (at least) locally identifiable reparametrization has the same
shape: the monomials in the new state variables in the new model are formed in
the same way as in the original model. Furthermore, we give a sufficient
observability condition for the existence of a state space transformation from
the original model to the new one. Our proof is constructive and can be
translated to an algorithm, which we illustrate by several examples. | http://arxiv.org/abs/2308.16273v2 |
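A standard toy example shows the kind of non-identifiability, and the reparametrization by partial specialization of the parameters, being discussed (the example is illustrative and not taken from the paper):

```python
import numpy as np

# Toy non-identifiable model: x' = -a*b*x, y = x(t), x(0) = 1.
# Only the product p = a*b affects the output, so (a, b) is not identifiable,
# while the specialized model with b := 1 (i.e., x' = -p*x) is.
def output(a, b, t):
    return np.exp(-a * b * t)  # closed-form solution

t = np.linspace(0, 5, 50)
y1 = output(2.0, 3.0, t)   # a*b = 6
y2 = output(1.5, 4.0, t)   # a*b = 6, different (a, b)
assert np.allclose(y1, y2)  # indistinguishable from output data

# After the partial specialization b := 1, the remaining parameter is
# identifiable: different values give different outputs.
assert not np.allclose(output(6.0, 1.0, t), output(5.0, 1.0, t))
```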
The problem of imaging materials with circular polarization properties is
discussed within the framework of vectorial ptychography. We demonstrate, both
theoretically and numerically, that using linear polarizations to investigate
such materials compromises the unicity of the solution provided by this
computational method. To overcome this limitation, an improved measurement
approach is proposed, which involves specific combinations of elliptical
polarizations. The effectiveness of this strategy is demonstrated by numerical
simulations and experimental measurements on cholesteric liquid crystal films,
which possess unique polarization properties. With the help of Pauli matrices
algebra, our results highlight the technique's ability to discern between
different types of circular polarizers, uniform vs. non-uniform, and determine
their handedness. | http://arxiv.org/abs/2310.02058v1 |
We aim to understand how people assess human likeness in navigation produced
by people and artificially intelligent (AI) agents in a video game. To this
end, we propose a novel AI agent with the goal of generating more human-like
behavior. We collect hundreds of crowd-sourced assessments comparing the
human-likeness of navigation behavior generated by our agent and baseline AI
agents with human-generated behavior. Our proposed agent passes a Turing Test,
while the baseline agents do not. By passing a Turing Test, we mean that human
judges could not quantitatively distinguish between videos of a person and an
AI agent navigating. To understand what people believe constitutes human-like
navigation, we extensively analyze the justifications of these assessments.
This work provides insights into the characteristics that people consider
human-like in the context of goal-directed video game navigation, which is a
key step for further improving human interactions with AI agents. | http://arxiv.org/abs/2303.02160v1 |
When building a new application we are increasingly confronted with the need
to reuse and integrate pre-existing knowledge. Nevertheless, this prior
knowledge is virtually impossible to reuse as-is. This is true even in domains,
e.g., eHealth, where a lot of effort has been put into developing high-quality
standards and reference ontologies, e.g., FHIR. In this
paper, we propose an integrated methodology, called iTelos, which enables data
and knowledge reuse towards the construction of Interoperable Electronic Health
Records (iEHR). The key intuition is that the data level and the schema level
of an application should be developed independently, thus allowing for maximum
flexibility in the reuse of the prior knowledge, but under the overall guidance
of the needs to be satisfied, formalized as competence queries. This intuition
is implemented by codifying all the requirements, including those concerning
reuse, as part of a purpose defined a priori, which is then used to drive a
middle-out development process where the application schema and data are
continuously aligned. The proposed methodology is validated through its
application to a large-scale case study. | http://arxiv.org/abs/2305.06088v1 |
Gromov-Wasserstein distance has found many applications in machine learning
due to its ability to compare measures across metric spaces and its invariance
to isometric transformations. However, in certain applications, this invariance
property can be too flexible, thus undesirable. Moreover, the
Gromov-Wasserstein distance solely considers pairwise sample similarities in
input datasets, disregarding the raw feature representations. We propose a new
optimal transport-based distance, called Augmented Gromov-Wasserstein, that
allows for some control over the level of rigidity to transformations. It also
incorporates feature alignments, enabling us to better leverage prior knowledge
on the input data for improved performance. We present theoretical insights
into the proposed metric. We then demonstrate its usefulness for single-cell
multi-omic alignment tasks and a transfer learning scenario in machine
learning. | http://arxiv.org/abs/2307.10093v1 |
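The isometry invariance that the paper argues can be too flexible is easy to see: Gromov-Wasserstein compares only intra-space pairwise-distance matrices, which a rigid transformation leaves unchanged, while the raw feature representations still differ. A NumPy illustration of this motivation, not the Augmented GW metric itself:

```python
import numpy as np

rng = np.random.default_rng(1)

def pairwise(X):
    """Euclidean pairwise-distance matrix, the intra-space cost GW compares."""
    d = X[:, None, :] - X[None, :, :]
    return np.sqrt((d ** 2).sum(-1))

X = rng.normal(size=(30, 2))

# Rigid transformation: rotation by 40 degrees plus a translation.
th = np.deg2rad(40.0)
Rm = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
Y = X @ Rm.T + np.array([5.0, -2.0])

# GW sees only these matrices, so the two clouds are indistinguishable...
assert np.allclose(pairwise(X), pairwise(Y))

# ...whereas the raw features do tell them apart, which is the information a
# feature-alignment term can exploit.
assert not np.allclose(X, Y)
```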
Optimizing a machine learning pipeline for a task at hand requires careful
configuration of various hyperparameters, typically supported by an AutoML
system that optimizes the hyperparameters for the given training dataset. Yet,
depending on the AutoML system's own second-order meta-configuration, the
performance of the AutoML process can vary significantly. Current AutoML
systems cannot automatically adapt their own configuration to a specific use
case. Further, they cannot comply with user-defined application constraints on the
effectiveness and efficiency of the pipeline and its generation. In this paper,
we propose CAML, which uses meta-learning to automatically adapt its own AutoML
parameters, such as the search strategy, the validation strategy, and the
search space, for a task at hand. The dynamic AutoML strategy of CAML takes
user-defined constraints into account and obtains constraint-satisfying
pipelines with high predictive performance. | http://arxiv.org/abs/2306.16913v2 |
The nature of dark matter (DM) remains one of the most important unanswered
questions in particle physics. Here, we propose a novel scenario for DM in
which weakly interacting massive particles (WIMPs) can freeze-in due to a
first-order phase transition (FOPT) in the early Universe. The FOPT dilutes the
pre-existing DM density to zero and leads to a sudden change in DM mass,
preventing WIMPs from re-equilibrating due to their large mass-to-temperature
ratio. Following the FOPT, WIMPs are produced via a freeze-in process, even
though their interactions are NOT feeble. We demonstrate this concept using a
simplified model and then apply it to a realistic model with a delayed
electroweak phase transition. Our work presents a promising new direction for
the freeze-in mechanism, and also extends the category of WIMP DM. | http://arxiv.org/abs/2304.00908v3 |
Privacy concerns have led to a surge in the creation of synthetic datasets,
with diffusion models emerging as a promising avenue. Although prior studies
have performed empirical evaluations on these models, there has been a gap in
providing a mathematical characterization of their privacy-preserving
capabilities. To address this, we present the pioneering theoretical
exploration of the privacy preservation inherent in discrete diffusion models
(DDMs) for discrete dataset generation. Focusing on per-instance differential
privacy (pDP), our framework elucidates the potential privacy leakage for each
data point in a given training dataset, offering insights into how the privacy
loss of each point correlates with the dataset's distribution. Our bounds also
show that training with $s$-sized data points leads to a surge in privacy
leakage from $(\epsilon, O(\frac{1}{s^2\epsilon}))$-pDP to $(\epsilon,
O(\frac{1}{s\epsilon}))$-pDP of the DDM during the transition from the pure
noise to the synthetic clean data phase, and a faster decay in diffusion
coefficients amplifies the privacy guarantee. Finally, we empirically verify
our theoretical findings on both synthetic and real-world datasets. | http://arxiv.org/abs/2310.15524v3 |
Image search engines enable the retrieval of images relevant to a query
image. In this work, we consider the setting where a query for similar images
is derived from a collection of images. For visual search, the similarity
measurements may be made along multiple axes, or views, such as style and
color. We assume access to a set of feature extractors, each of which computes
representations for a specific view. Our objective is to design a retrieval
algorithm that effectively combines similarities computed over representations
from multiple views. To this end, we propose a self-supervised learning method
for extracting disentangled view-specific representations for images such that
the inter-view overlap is minimized. We show how this allows us to compute the
intent of a collection as a distribution over views. We show how effective
retrieval can be performed by prioritizing candidate expansion images that
match the intent of a query collection. Finally, we present a new querying
mechanism for image search enabled by composing multiple collections and
perform retrieval under this setting using the techniques presented in this
paper. | http://arxiv.org/abs/2302.02249v1 |
We apply a new method for learning equations from data -- Exhaustive Symbolic
Regression (ESR) -- to late-type galaxy dynamics as encapsulated in the radial
acceleration relation (RAR). Relating the centripetal acceleration due to
baryons, $g_\text{bar}$, to the total dynamical acceleration, $g_\text{obs}$,
the RAR has been claimed to manifest a new law of nature due to its regularity
and tightness, in agreement with Modified Newtonian Dynamics (MOND). Fits to
this relation have been restricted by prior expectations to particular
functional forms, while ESR affords an exhaustive and nearly prior-free search
through functional parameter space to identify the equations optimally trading
accuracy with simplicity. Working with the SPARC data, we find the best
functions typically satisfy $g_\text{obs} \propto g_\text{bar}$ at high
$g_\text{bar}$, although the coefficient of proportionality is not clearly
unity and the deep-MOND limit $g_\text{obs} \propto \sqrt{g_\text{bar}}$ as
$g_\text{bar} \to 0$ is scarcely evident at all. By generating mock data
according to MOND with or without the external field effect, we find that
symbolic regression would not be expected to identify the generating function
or reconstruct successfully the asymptotic slopes. We conclude that the limited
dynamical range and significant uncertainties of the SPARC RAR preclude a
definitive statement of its functional form, and hence that this data alone can
neither demonstrate nor rule out law-like gravitational behaviour. | http://arxiv.org/abs/2301.04368v2 |
An almost Abelian Lie group is a non-Abelian Lie group with a codimension 1
Abelian subgroup. We show that all discrete subgroups of complex simply
connected almost Abelian groups are finitely generated. The topology of
connected almost Abelian Lie groups is studied by expressing each connected
almost Abelian Lie group as a quotient of its universal covering group by a
discrete normal subgroup. We then prove that no complex connected almost
Abelian group is compact, and give conditions for the compactness of connected
subgroups of such groups. Towards studying the homotopy type of complex
connected almost Abelian groups, we investigate the maximal compact subgroups
of such groups. | http://arxiv.org/abs/2308.08059v1 |
The question of the conditions under which oscillators with slightly different
frequencies synchronize appears in various settings. We show that
synchronization can be achieved even for harmonic oscillators that are
bilinearly coupled via a purely dissipative interaction. By appropriately
tuning gain/loss, stable dynamics may be achieved in which, for the cases
studied in this work, all oscillators are synchronized. These findings are interpreted using the
complex eigenvalues and eigenvectors of the non-Hermitian matrix describing the
dynamics of the system. | http://arxiv.org/abs/2301.13614v1 |
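The mechanism can be illustrated with a toy two-mode amplitude model (an illustrative assumption, not the paper's exact system): opposite detunings on the diagonal, a purely dissipative bilinear coupling entering every matrix entry, and a uniform gain tuned so that one eigenvalue of the non-Hermitian matrix becomes marginal while the other stays damped.

```python
import cmath

def eig2(a, b, c, d):
    """Eigenvalues of the 2x2 complex matrix [[a, b], [c, d]]."""
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

# Toy parameters: detuning +/- i*delta, dissipative coupling -Gamma on all
# entries, and a uniform gain g added to both modes (all values hypothetical).
delta, Gamma = 0.3, 0.5
g = Gamma - (Gamma**2 - delta**2) ** 0.5  # gain tuned so the slow mode is marginal

lam_slow, lam_fast = eig2(1j * delta - Gamma + g, -Gamma,
                          -Gamma, -1j * delta - Gamma + g)

# The long-lived mode locks both oscillators (equal amplitudes, fixed phase
# offset); the other mode decays quickly.
print(lam_slow.real, lam_fast.real)  # ~0.0 and ~-0.8
```

With the gain detuned away from this value, both eigenvalues acquire nonzero real parts and the locked mode either decays or grows, which is why the tuning matters.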
Motivation: Studies including more than one type of 'omics data sets are
becoming more prevalent. Integrating these data sets can be a way to solidify
findings and even to make new discoveries. However, integrating multi-omics
data sets is challenging. Typically, data sets are integrated by performing an
all-vs-all correlation analysis, where each feature of the first data set is
correlated to each feature of the second data set. However, all-vs-all
association testing produces unstructured results that are hard to interpret,
and involves potentially unnecessary hypothesis testing that reduces
statistical power due to false discovery rate (FDR) adjustment.
Implementation: Here, we present the anansi framework, and accompanying R
package, as a way to improve upon all-vs-all association analysis. We take a
knowledge-based approach where external databases like KEGG are used to
constrain the all-vs-all association hypothesis space, only considering
pairwise associations that are a priori known to occur. This produces
structured results that are easier to interpret, and increases statistical
power by skipping unnecessary hypothesis tests. In this paper, we present the
anansi framework and demonstrate its application to learn metabolite-function
interactions in the context of host-microbe interactions. We further extend our
framework beyond pairwise association testing to differential association
testing, and show how anansi can be used to identify associations that differ
in strength or degree based on sample covariates such as case/control status.
Availability: https://github.com/thomazbastiaanssen/anansi | http://arxiv.org/abs/2305.10832v1 |
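The statistical-power argument can be demonstrated with a plain Benjamini-Hochberg FDR adjustment: a genuine association that is lost among all-vs-all tests survives when the hypothesis space is constrained a priori. The p-values and set sizes below are made-up numbers, not anansi's R interface.

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Return a list of booleans: which hypotheses are rejected at FDR alpha."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    # largest rank k such that p_(k) <= (k/m) * alpha
    max_k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            max_k = rank
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= max_k:
            reject[i] = True
    return reject

# One genuinely associated pair (p = 0.004) among mostly-null tests.
signal = 0.004
nulls = [0.05 + 0.024 * k for k in range(39)]  # 39 unremarkable p-values

# All-vs-all: the signal is buried among 40 tests and fails FDR correction.
all_vs_all = benjamini_hochberg([signal] + nulls)
# Knowledge-constrained: only 4 a-priori plausible pairs are tested at all.
constrained = benjamini_hochberg([signal, 0.3, 0.5, 0.7])

print(all_vs_all[0], constrained[0])  # False True
```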
While Large Language Models (LLMs) are the dominant models for generative
tasks in language, they do not perform as well as diffusion models on image and
video generation. To effectively use LLMs for visual generation, one crucial
component is the visual tokenizer that maps pixel-space inputs to discrete
tokens appropriate for LLM learning. In this paper, we introduce MAGVIT-v2, a
video tokenizer designed to generate concise and expressive tokens for both
videos and images using a common token vocabulary. Equipped with this new
tokenizer, we show that LLMs outperform diffusion models on standard image and
video generation benchmarks including ImageNet and Kinetics. In addition, we
demonstrate that our tokenizer surpasses the previously top-performing video
tokenizer on two more tasks: (1) video compression comparable to the
next-generation video codec (VVC) according to human evaluations, and (2)
learning effective representations for action recognition tasks. | http://arxiv.org/abs/2310.05737v3 |
This paper establishes a link between endowments, patience types, and the
parameters of the HARA Bernoulli utility function that ensure equilibrium
uniqueness in an economy with two goods and two impatience types with additive
separable preferences. We provide sufficient conditions that guarantee
uniqueness of equilibrium for any possible value of $\gamma$ in the HARA
utility function
$\frac{\gamma}{1-\gamma}\left(b+\frac{a}{\gamma}x\right)^{1-\gamma}$. The
analysis contributes to the literature on uniqueness in pure exchange economies
with two goods and two agent types and extends the result in [4]. | http://arxiv.org/abs/2308.09347v1 |
Multiwavelength observations are now the norm for studying blazars' various
states of activity, classifying them, and determining possible underlying
physical processes driving their emission. Broadband emission models have
become indispensable tools for testing emission scenarios and constraining
physical quantities such as the magnetic field strength, Doppler factor, or shape of the
particle distribution of the emission zone(s). We announce here the first
public release of a new tool, Bjet_MCMC, that can automatically fit broadband
spectral energy distributions (SEDs) of blazars. The complete code is available
on GitHub and allows testing leptonic synchrotron self-Compton models (SSC),
with or without external inverse-Compton processes from the thermal environment
of supermassive black holes (accretion disk and broad line region). The code is
designed to be user-friendly and computationally efficient. It contains a core
written in C++ and a fully parallelized SED fitting method. The original
multi-SSC zones model of Bjet is also available on GitHub but is not included
in the MCMC fitting process at the moment. We present the features,
performance, and results of Bjet_MCMC, as well as user advice. | http://arxiv.org/abs/2307.08804v2 |
The standard paradigm of neural language generation adopts maximum likelihood
estimation (MLE) as the optimizing method. From a distributional view, MLE in
fact minimizes the Kullback-Leibler divergence (KLD) between the distribution
of the real data and that of the model. However, this approach forces the model
to distribute non-zero (sometimes large) probability mass to all training
samples regardless of their quality. Moreover, in the attempt to cover the
low-probability regions in the data distribution, the model systematically
overestimates the probability of corrupted text sequences, which we conjecture
is one of the main reasons for text degeneration during autoregressive
decoding. To remedy this problem, we leverage the total variation distance
(TVD) with its robustness to outliers, and develop practical bounds to apply it
to language generation. Then, we introduce the TaiLr objective that balances
the tradeoff of estimating TVD. Intuitively, TaiLr downweights real data
samples that have low model probabilities with tunable penalization intensity.
Experimental results show that our method alleviates the overestimation of
degenerated sequences without sacrificing diversity and improves generation
quality on a wide range of text generation tasks. | http://arxiv.org/abs/2302.13344v1 |
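The robustness contrast between KLD and TVD that motivates TaiLr can be seen on a toy discrete distribution: as a model assigns ever-smaller mass to a rare, low-quality sample, the KLD penalty grows without bound while the TVD penalty stays capped at that sample's (small) probability mass. All numbers are illustrative.

```python
import math

def kld(p, q):
    """KL(p || q) over a discrete support; blows up when q -> 0 where p > 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def tvd(p, q):
    """Total variation distance; bounded by 1 regardless of the distributions."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

# Data distribution with a rare, corrupted sequence carrying mass 0.01.
p = [0.50, 0.49, 0.01]
# Two models that ignore that sequence to different degrees.
q_mild  = [0.50, 0.49 + 0.01 - 1e-6,  1e-6]
q_harsh = [0.50, 0.49 + 0.01 - 1e-12, 1e-12]

# KLD punishes the missing mass ever more severely, forcing coverage...
print(kld(p, q_mild) < kld(p, q_harsh))  # True
# ...while the TVD penalty is essentially the outlier's own mass, ~0.01.
print(round(tvd(p, q_mild), 6), round(tvd(p, q_harsh), 6))
```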
Segmentation and classification of cell nuclei in histopathology images using
deep neural networks (DNNs) can save pathologists' time for diagnosing various
diseases, including cancers, by automating cell counting and morphometric
assessments. It is now well-known that the accuracy of DNNs increases with the
sizes of annotated datasets available for training. Although multiple datasets
of histopathology images with nuclear annotations and class labels have been
made publicly available, the set of class labels differ across these datasets.
We propose a method to train DNNs for instance segmentation and classification
on multiple datasets where the set of classes across the datasets are related
but not the same. Specifically, our method is designed to utilize a
coarse-to-fine class hierarchy, where the set of classes labeled and annotated
in a dataset can be at any level of the hierarchy, as long as the classes are
mutually exclusive. Within a dataset, the set of classes need not even be at
the same level of the class hierarchy tree. Our results demonstrate that
segmentation and classification metrics for the class set used by the test
split of a dataset can improve by pre-training on another dataset that may even
have a different set of classes due to the expansion of the training set
enabled by our method. Furthermore, generalization to previously unseen
datasets also improves by combining multiple other datasets with different sets
of classes for training. The improvement is both qualitative and quantitative.
The proposed method can be adapted for various loss functions, DNN
architectures, and application domains. | http://arxiv.org/abs/2310.03346v1 |
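The coarse-to-fine idea can be sketched by marginalizing fine-class probabilities up the hierarchy whenever an example carries only a coarse label, so a single fine-grained head can be trained on datasets labeled at different levels. The two-level hierarchy and class names below are hypothetical; the paper's actual loss and architecture are not reproduced here.

```python
import math

# Hypothetical nucleus-class hierarchy: coarse label -> fine labels.
HIERARCHY = {
    "epithelial": ["benign_epithelial", "malignant_epithelial"],
    "inflammatory": ["lymphocyte", "neutrophil"],
}
FINE = [f for fines in HIERARCHY.values() for f in fines]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def coarse_log_prob(logits, coarse_label):
    """Cross-entropy target for a coarsely labeled example: sum the
    fine-class probabilities belonging to the labeled coarse class."""
    probs = dict(zip(FINE, softmax(logits)))
    return math.log(sum(probs[f] for f in HIERARCHY[coarse_label]))

# A head over the 4 fine classes can learn from a coarsely labeled dataset
# without conflicting with finely labeled data: the loss only constrains the
# marginal over the coarse class.
logits = [2.0, 1.0, -1.0, -2.0]
print(coarse_log_prob(logits, "epithelial"))  # close to 0 => confident
```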
The "cosmic web", the filamentary large-scale structure in a cold dark matter
Universe, is readily apparent via galaxy tracers in spectroscopic surveys.
However, the underlying dark matter structure is as of yet unobservable and
mapping the diffuse gas permeating it lies beyond practical observational
capabilities. A recently developed technique, inspired by the growth and
movement of Physarum polycephalum "slime mold", has been used to map the cosmic
web of a low redshift sub-sample of the SDSS spectroscopic galaxy catalog. This
model, the Monte Carlo Physarum Machine (MCPM) was shown to promisingly
reconstruct the cosmic web. Here, we improve the formalism used in calibrating
the MCPM to better recreate the Bolshoi-Planck cosmological simulation's
density distributions and apply it to a significantly larger cosmological
volume than previous works using the Sloan Digital Sky Survey (SDSS, $z < 0.1$)
and the Extended Baryon Oscillation Spectroscopic Survey (eBOSS) Luminous Red
Galaxy (LRG, $z \lesssim 0.5$) spectroscopic catalogs. We present the "Cosmic
Slime Value Added Catalog" which provides estimates for the cosmic overdensity
for the sample of galaxies probed spectroscopically by the above SDSS surveys.
In addition, we provide the fully reconstructed 3D density cubes of these
volumes. These data products were released as part of Sloan Digital Sky Survey
Data Release 17 and are publicly available. We present the input catalogs and
the methodology for constructing these data products. We also highlight
exciting potential applications to galaxy evolution, cosmology, the
intergalactic and circumgalactic medium, and transient phenomena localization. | http://arxiv.org/abs/2301.02719v1 |
Denote by $N_{\cal N} (\Omega,\lambda)$ the counting function of the spectrum
of the Neumann problem in the domain $\Omega$ on the plane. G. P\'olya
conjectured that $N_{\cal N} (\Omega,\lambda) \ge (4\pi)^{-1} |\Omega|
\lambda$. We prove that for convex domains $N_{\cal N} (\Omega,\lambda) \ge (2
\sqrt 3 \,j_0^2)^{-1} |\Omega| \lambda$. Here $j_0$ is the first zero of the
Bessel function $J_0$. | http://arxiv.org/abs/2309.01432v1 |
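The proved constant can be compared numerically with Pólya's conjectured one; the value of $j_0$ below is the standard first positive zero of $J_0$. The proved bound holds with a constant roughly 63% of Pólya's.

```python
import math

j0 = 2.404825557695773  # first positive zero of the Bessel function J0

c_polya  = 1 / (4 * math.pi)               # constant in Polya's conjectured bound
c_proved = 1 / (2 * math.sqrt(3) * j0**2)  # constant proved here for convex domains

print(round(c_polya, 4), round(c_proved, 4))  # 0.0796 0.0499
print(round(c_proved / c_polya, 3))           # ~0.627 of Polya's constant
```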
Traditional NER systems are typically trained to recognize coarse-grained
entities, and less attention is given to classifying entities into a hierarchy
of fine-grained lower-level subtypes. This article aims to advance Arabic NER
with fine-grained entities. We chose to extend Wojood (an open-source Nested
Arabic Named Entity Corpus) with subtypes. In particular, four main entity
types in Wojood, geopolitical entity (GPE), location (LOC), organization (ORG),
and facility (FAC), are extended with 31 subtypes. To do this, we first revised
Wojood's annotations of GPE, LOC, ORG, and FAC to be compatible with the LDC's
ACE guidelines, which yielded 5,614 changes. Second, all mentions of GPE, LOC,
ORG, and FAC (~44K) in Wojood are manually annotated with the LDC's ACE
subtypes. We refer to this extended version of Wojood as WojoodFine. To
evaluate our annotations, we measured the inter-annotator agreement (IAA) using
both Cohen's Kappa and the F1 score, resulting in 0.9861 and 0.9889,
respectively. To compute the baselines of WojoodFine, we fine-tune three
pre-trained Arabic BERT encoders in three settings: flat NER, nested NER, and
nested NER with subtypes, achieving F1 scores of 0.920, 0.866, and 0.885, respectively. Our
corpus and models are open-source and available at
https://sina.birzeit.edu/wojood/. | http://arxiv.org/abs/2310.17333v2 |
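Cohen's Kappa, one of the two IAA measures reported above, corrects raw agreement for chance agreement. A minimal implementation follows; the toy label sequences are invented for illustration, not drawn from WojoodFine.

```python
from collections import Counter

def cohens_kappa(ann_a, ann_b):
    """Chance-corrected agreement between two annotators' label sequences."""
    assert len(ann_a) == len(ann_b)
    n = len(ann_a)
    po = sum(a == b for a, b in zip(ann_a, ann_b)) / n   # observed agreement
    ca, cb = Counter(ann_a), Counter(ann_b)
    pe = sum((ca[l] / n) * (cb[l] / n) for l in set(ca) | set(cb))  # chance
    return (po - pe) / (1 - pe)

# Toy entity-type annotations from two hypothetical annotators:
a = ["GPE", "LOC", "ORG", "FAC", "GPE", "LOC", "ORG", "GPE"]
b = ["GPE", "LOC", "ORG", "FAC", "GPE", "LOC", "FAC", "GPE"]
print(round(cohens_kappa(a, b), 3))  # ~0.83: 7/8 raw agreement, corrected
```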
Future quantum technologies such as quantum communication, quantum sensing,
and distributed quantum computation, will rely on networks of shared
entanglement between spatially separated nodes. In this work, we provide
improved protocols/policies for entanglement distribution along a linear chain
of nodes, both homogeneous and inhomogeneous, that take practical limitations
such as photon losses, non-ideal measurements, and quantum memories with short
coherence times into account. For a wide range of parameters, our policies
improve upon previously known policies, such as the "swap-as-soon-as-possible"
policy, with respect to both the waiting time and the fidelity of the
end-to-end entanglement. This improvement is greatest for the most practically
relevant cases, namely, for short coherence times, high link losses, and highly
asymmetric links. To obtain our results, we model entanglement distribution
using a Markov decision process, and then we use the Q-learning reinforcement
learning (RL) algorithm to discover new policies. These new policies are
characterized by dynamic, state-dependent memory cutoffs and collaboration
between the nodes. In particular, we quantify this collaboration between the
nodes. Our quantifiers tell us how much "global" knowledge of the network every
node has. Finally, our understanding of the performance of large quantum
networks is currently limited by the computational inefficiency of simulating
them using RL or other optimization methods. Thus, in this work, we present a
method for nesting policies in order to obtain policies for large repeater
chains. By nesting our RL-based policies for small repeater chains, we obtain
policies for large repeater chains that improve upon the
swap-as-soon-as-possible policy, and thus we pave the way for a scalable method
for obtaining policies for long-distance entanglement distribution. | http://arxiv.org/abs/2303.00777v4 |
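Tabular Q-learning, the RL algorithm named above, can be sketched on a deliberately tiny MDP standing in for a single repeater link. The states, actions, rewards, and success probability are all invented for illustration and bear no relation to the paper's actual MDP formulation.

```python
import random

random.seed(0)

# Toy MDP: in state 0 the link has no entanglement; action 0 = "wait",
# action 1 = "attempt generation", which succeeds with probability P_SUCC
# and moves to the terminal state 1 with reward 1. Waiting/failing costs time.
P_SUCC, GAMMA, ALPHA = 0.6, 0.9, 0.1
Q = {0: [0.0, 0.0]}  # Q[state][action]

def step(state, action):
    """Return (next_state, reward, done) for the toy environment."""
    if action == 1 and random.random() < P_SUCC:
        return 1, 1.0, True          # entanglement established
    return 0, -0.05, False           # time cost of waiting / failing

for _ in range(5000):
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection (epsilon = 0.1)
        if random.random() < 0.1:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda x: Q[s][x])
        s2, r, done = step(s, a)
        target = r if done else r + GAMMA * max(Q[s2])
        Q[s][a] += ALPHA * (target - Q[s][a])  # standard Q-learning update
        s = s2

print(Q[0])  # attempting generation should dominate waiting
```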
We review a recently proposed definition of complexity of the structure of
self-gravitating fluids \cite{ch1}, and the criterion to define the simplest
mode of their evolution. We analyze the origin of these concepts and their
possible applications in the study of gravitational collapse. We start by
considering the static spherically symmetric case, extending the study next to
the static axially symmetric case. Afterward, we consider the non-static
spherically symmetric case. Two possible modes of evolution are proposed as
the simplest ones. One is the homologous condition; however, as was shown later
on, it may be useful to relax this condition to enlarge the set of possible
solutions, by adopting the so-called quasi-homologous condition. As another
example of symmetry, we consider fluids endowed with hyperbolic symmetry.
Exact solutions for static fluid distributions satisfying the condition of
minimal complexity are presented. An extension of the complexity factor to the
vacuum solutions of the Einstein equations represented by the Bondi metric is
discussed. A complexity hierarchy is established in this case, ranging from the
Minkowski spacetime (the simplest one) to gravitationally radiating systems
(the most complex). Finally, we propose a list of questions which, we believe,
deserve to be treated in the future. | http://arxiv.org/abs/2304.05870v1 |
We propose a new approach to volatility modeling by combining deep learning
(LSTM) and realized volatility measures. This LSTM-enhanced realized GARCH
framework incorporates and distills modeling advances from financial
econometrics, high frequency trading data and deep learning. Bayesian inference
via the Sequential Monte Carlo method is employed for statistical inference and
forecasting. The new framework can jointly model the returns and realized
volatility measures, has an excellent in-sample fit and superior predictive
performance compared to several benchmark models, while being able to adapt
well to the stylized facts in volatility. The performance of the new framework
is tested using a wide range of metrics, from marginal likelihood, volatility
forecasting, to tail risk forecasting and option pricing. We report on a
comprehensive empirical study using 31 widely traded stock indices over a time
period that includes the COVID-19 pandemic. | http://arxiv.org/abs/2302.08002v2 |
We give a probabilistic interpretation of the configurational partition
function of the logarithmic sector of critical cosmological topologically
massive gravity, in which the Hurwitz numbers considered in our previous works
assume the role of probabilities in a distribution on cycles of permutations.
In particular, it is shown that the permutations are distributed according to
the Ewens sampling formula which plays a major role in the theory of partition
structures and their applications to diffusive processes of fragmentation, and
in random trees. This new probabilistic result together with the previously
established evidence of solitons in the theory provide new insights on the
instability originally observed in the theory. We argue that the unstable
propagation of a seed soliton at single particle level induces the generation
of fragments of defect soliton clusters with rooted tree configuration at
multiparticle level, providing a disordered landscape. The Shannon information
entropy of the probability distribution is then introduced as a measure of the
evolution of the unstable soliton clusters generated. Finally, based on
Feynman's path integral formalism on permutation symmetry in the
$\lambda$-transition of liquid helium, we argue that the existence of
permutation cycles in the configurational log partition function indicates the
presence of Bose-Einstein condensates in log gravity. | http://arxiv.org/abs/2302.07331v2 |
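The Ewens sampling formula mentioned above assigns to a permutation cycle type with a_j cycles of length j the probability n!/theta^(n) * prod_j (theta/j)^(a_j) / a_j!, where theta^(n) is the rising factorial. A minimal sketch (not tied to the paper's partition function) can verify that these probabilities form a distribution, and that theta = 1 recovers the uniform measure on permutations:

```python
from math import factorial

def partitions(n, max_part=None):
    """Yield integer partitions of n as non-increasing lists of parts."""
    max_part = n if max_part is None else max_part
    if n == 0:
        yield []
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield [k] + rest

def ewens_prob(cycle_type, theta):
    """Ewens sampling formula: probability of a cycle type, given as a
    partition (list of cycle lengths) of n."""
    n = sum(cycle_type)
    a = [cycle_type.count(j) for j in range(1, n + 1)]  # a[j-1] j-cycles
    rising = 1.0
    for i in range(n):
        rising *= theta + i                 # theta^(n), rising factorial
    prob = factorial(n) / rising
    for j, aj in enumerate(a, start=1):
        prob *= theta**aj / (j**aj * factorial(aj))
    return prob

total = sum(ewens_prob(p, theta=0.5) for p in partitions(5))
print(round(total, 12))            # 1.0: the cycle types exhaust S_5
print(ewens_prob([5], theta=1.0))  # 0.2 = 4!/5!, the fraction of 5-cycles
```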
Shortest path (SP) computation is the fundamental operation in various
networks such as urban networks, logistic networks, communication networks,
social networks, etc. With the development of technology and societal
expansions, those networks tend to be massive. This, in turn, causes
deteriorated performance of SP computation, and graph partitioning is commonly
leveraged to scale up the SP algorithms. However, the partitioned shortest path
(PSP) index has never been systematically investigated and theoretically
analyzed, and there is a lack of experimental comparison among different PSP
indexes. Moreover, few studies have explored PSP index maintenance in dynamic
networks. Therefore, in this paper, we systematically analyze the dynamic PSP
index by proposing a universal scheme for it. Specifically, we first propose
two novel partitioned shortest path strategies (No-boundary and Post-boundary
strategies) to improve the performance of PSP indexes and design the
corresponding index maintenance approaches to deal with dynamic scenarios. Then
we categorize the partition methods from the perspective of partition structure
to facilitate the selection of partition methods in the PSP index. Furthermore,
we propose a universal scheme for designing the PSP index by coupling its three
dimensions (i.e. PSP strategy, partition structure, and SP algorithm). Based on
this scheme, we propose five new PSP indexes with prominent performance in
either query or update efficiency. Lastly, extensive experiments are
implemented to demonstrate the effectiveness of the proposed PSP scheme, with
valuable guidance provided on the PSP index design. | http://arxiv.org/abs/2310.08213v2 |
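One concrete instance of the SP-algorithm dimension in the proposed scheme is the classic Dijkstra routine, which typically serves as the intra-partition solver. The sketch below, including the toy graph and its informal two-part "partition" joined by a boundary edge, is illustrative only and is not one of the paper's PSP indexes.

```python
import heapq

def dijkstra(adj, src):
    """Single-source shortest paths on a non-negatively weighted graph.
    adj: {u: [(v, w), ...]}. Returns {reached_vertex: distance}."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Toy network: parts {a, b, c} and {d, e} joined by the boundary edge c-d.
adj = {
    "a": [("b", 2), ("c", 5)],
    "b": [("a", 2), ("c", 1)],
    "c": [("b", 1), ("a", 5), ("d", 3)],
    "d": [("c", 3), ("e", 4)],
    "e": [("d", 4)],
}
print(dijkstra(adj, "a"))  # shortest a-to-e path crosses the boundary at c-d
```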
In micro-assembly applications, ensembles of chiplets immersed in a dielectric
fluid are steered using dielectrophoretic forces induced by an array of
electrodes. Generalizing the finite-population deterministic models
proposed in prior works for individual chiplet position dynamics, we derive a
controlled mean field model for a continuum of chiplet population in the form
of a nonlocal, nonlinear partial differential equation. The proposed model
accounts for the stochastic forces as well as two different types of nonlocal
interactions, viz. chiplet-to-chiplet and chiplet-to-electrode interactions.
Both of these interactions are nonlinear functions of the electrode voltage
input. We prove that the deduced mean field evolution can be expressed as the
Wasserstein gradient flow of a Lyapunov-like energy functional. With respect to
this functional, the resulting dynamics is a gradient descent on the manifold
of joint population density functions with finite second moments that are
supported on the position coordinates. | http://arxiv.org/abs/2303.10564v2 |
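In the standard Otto/JKO formalism (a generic sketch; the paper's specific Lyapunov-like functional is not reproduced here), a Wasserstein gradient flow of an energy functional $F[\rho]$ over densities takes the form

```latex
% Generic Wasserstein gradient flow: the drift is the gradient of the
% first variation of the energy functional F.
\frac{\partial \rho}{\partial t}
  = \nabla \cdot \Big( \rho \, \nabla \frac{\delta F}{\delta \rho} \Big),
\qquad
F[\rho] = \int U(\rho)\,\mathrm{d}x + \int V\rho \,\mathrm{d}x
        + \tfrac{1}{2}\iint W(x,y)\,\rho(x)\,\rho(y)\,\mathrm{d}x\,\mathrm{d}y ,
```

where, schematically, the internal-energy term $U$ captures the stochastic forces, the potential term $V$ the chiplet-to-electrode interaction, and the interaction kernel $W$ the nonlocal chiplet-to-chiplet interaction; in the paper both $V$ and $W$ depend nonlinearly on the electrode voltage input.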
Accurate diagnosis of the pathological subtype of lung cancer is of
significant importance for follow-up treatment and prognosis management.
In this paper, we propose self-generating hybrid feature network (SGHF-Net) for
accurately classifying lung cancer subtypes on computed tomography (CT) images.
Inspired by studies stating that cross-scale associations exist in the image
patterns between the same case's CT images and its pathological images, we
innovatively developed a pathological feature synthetic module (PFSM), which
quantitatively maps cross-modality associations through deep neural networks,
to derive the "gold standard" information contained in the corresponding
pathological images from CT images. Additionally, we designed a radiological
feature extraction module (RFEM) to directly acquire CT image information and
integrated it with the pathological priors under an effective feature fusion
framework, enabling the entire classification model to generate more indicative
and specific pathologically related features and eventually output more
accurate predictions. The superiority of the proposed model lies in its ability
to self-generate hybrid features that contain multi-modality image information
based on a single-modality input. To evaluate the effectiveness, adaptability,
and generalization ability of our model, we performed extensive experiments on
a large-scale multi-center dataset (i.e., 829 cases from three hospitals) to
compare our model and a series of state-of-the-art (SOTA) classification
models. The experimental results demonstrated the superiority of our model for
lung cancer subtype classification, with significant improvements in terms of
accuracy (ACC), area under the curve (AUC), and F1 score. | http://arxiv.org/abs/2308.04663v1 |