text string | source string |
|---|---|
In this article, we consider machine learning algorithms to accurately
predict two variables associated with the $Q$-voter model in complex networks,
i.e., (i) the consensus time and (ii) the frequency of opinion changes.
Leveraging nine topological measures of the underlying networks, we verify that
the clustering coefficient (C) and information centrality (IC) emerge as the
most important predictors for these outcomes. Notably, the machine learning
algorithms demonstrate accuracy across three distinct initialization methods of
the $Q$-voter model, including random selection and the involvement of high-
and low-degree agents with positive opinions. By unraveling the intricate
interplay between network structure and dynamics, this research sheds light on
the underlying mechanisms responsible for polarization effects and other
dynamic patterns in social systems. Adopting a holistic approach that
comprehends the complexity of network systems, this study offers insights into
the intricate dynamics associated with polarization effects and paves the way
for investigating the structure and dynamics of complex systems through modern
machine learning methods. | http://arxiv.org/abs/2310.09131v1 |
A class of monotone operator equations, which can be decomposed into sum of a
gradient of a strongly convex function and a linear and skew-symmetric
operator, is considered in this work. Based on discretization of the
generalized gradient flow, gradient and skew-symmetric splitting (GSS) methods
are proposed and proved to convergent in linear rate. To further accelerate the
convergence, an accelerated gradient flow is proposed and accelerated gradient
and skew-symmetric splitting (AGSS) methods are developed, which extends the
acceleration among the existing works on the convex minimization to a more
general class of monotone operator equations. In particular, when applied to
smooth saddle point systems with bilinear coupling, an accelerated transformed
primal-dual (ATPD) method is proposed and shown to achieve linear rates with
optimal lower iteration complexity. | http://arxiv.org/abs/2303.09009v1 |
The Uniform Information Density (UID) principle posits that humans prefer to
spread information evenly during language production. We examine if this UID
principle can help capture differences between Large Language Models
(LLMs)-generated and human-generated texts. We propose GPT-who, the first
psycholinguistically-inspired domain-agnostic statistical detector. This
detector employs UID-based features to model the unique statistical signature
of each LLM and human author for accurate detection. We evaluate our method
using 4 large-scale benchmark datasets and find that GPT-who outperforms
state-of-the-art detectors (both statistical- & non-statistical) such as GLTR,
GPTZero, DetectGPT, OpenAI detector, and ZeroGPT by over $20$% across domains.
In addition to better performance, it is computationally inexpensive and
utilizes an interpretable representation of text articles. We find that GPT-who
can distinguish texts generated by very sophisticated LLMs, even when the
overlying text is indiscernible. UID-based measures for all datasets and code
are available at https://github.com/saranya-venkatraman/gpt-who. | http://arxiv.org/abs/2310.06202v3 |
In this paper, we introduce ProNet, an novel deep learning approach designed
for multi-horizon time series forecasting, adaptively blending autoregressive
(AR) and non-autoregressive (NAR) strategies. Our method involves dividing the
forecasting horizon into segments, predicting the most crucial steps in each
segment non-autoregressively, and the remaining steps autoregressively. The
segmentation process relies on latent variables, which effectively capture the
significance of individual time steps through variational inference. In
comparison to AR models, ProNet showcases remarkable advantages, requiring
fewer AR iterations, resulting in faster prediction speed, and mitigating error
accumulation. On the other hand, when compared to NAR models, ProNet takes into
account the interdependency of predictions in the output space, leading to
improved forecasting accuracy. Our comprehensive evaluation, encompassing four
large datasets, and an ablation study, demonstrate the effectiveness of ProNet,
highlighting its superior performance in terms of accuracy and prediction
speed, outperforming state-of-the-art AR and NAR forecasting models. | http://arxiv.org/abs/2310.19322v2 |
Classical models of spin-lattice coupling are at present unable to accurately
reproduce results for numerous properties of ferromagnetic materials, such as
heat transport coefficients or the sudden collapse of the magnetic moment in
hcp-Fe under pressure. This inability has been attributed to the absence of a
proper treatment of effects that are inherently quantum mechanical in nature,
notably spin-orbit coupling. This paper introduces a time-dependent,
non-collinear tight binding model, complete with spin-orbit coupling and vector
Stoner exchange terms, that is capable of simulating the Einstein-de Haas
effect in a ferromagnetic $\textrm{Fe}_{15}$ cluster. The tight binding model
is used to investigate the adiabaticity timescales that determine the response
of the orbital and spin angular momenta to a rotating, externally applied $B$
field, and we show that the qualitative behaviours of our simulations can be
extrapolated to realistic timescales by use of the adiabatic theorem. An
analysis of the trends in the torque contributions with respect to the field
strength demonstrates that SOC is necessary to observe a transfer of angular
momentum from the electrons to the nuclei at experimentally realistic $B$
fields. The simulations presented in this paper demonstrate the Einstein-de
Haas effect from first principles using a Fe cluster. | http://arxiv.org/abs/2308.03130v2 |
Artificial Intelligence-Generated Content (AIGC) is an automated method for
generating, manipulating, and modifying valuable and diverse data using AI
algorithms creatively. This survey paper focuses on the deployment of AIGC
applications, e.g., ChatGPT and Dall-E, at mobile edge networks, namely mobile
AIGC networks, that provide personalized and customized AIGC services in real
time while maintaining user privacy. We begin by introducing the background and
fundamentals of generative models and the lifecycle of AIGC services at mobile
AIGC networks, which includes data collection, training, finetuning, inference,
and product management. We then discuss the collaborative cloud-edge-mobile
infrastructure and technologies required to support AIGC services and enable
users to access AIGC at mobile edge networks. Furthermore, we explore
AIGCdriven creative applications and use cases for mobile AIGC networks.
Additionally, we discuss the implementation, security, and privacy challenges
of deploying mobile AIGC networks. Finally, we highlight some future research
directions and open issues for the full realization of mobile AIGC networks. | http://arxiv.org/abs/2303.16129v4 |
Non-fungible tokens (NFTs) are unique digital assets stored on the blockchain
and is used to certify ownership and authenticity of the digital asset. NFTs
were first created in 2014 while their popularity peaked between 2021 and 2022.
In this paper, the authors dive into the world of Non-Fungible Tokens (NFTs),
their history, the Future of NFTs, as well as the security concerns. | http://arxiv.org/abs/2310.15518v1 |
Utilizing exact diagonalization (ED) techniques, we investigate a
one-dimensional, non-reciprocal, interacting hard-core boson model under a
Stark potential with tail curvature. By employing the non-zero imaginary
eigenenergies ratio, half-chain entanglement entropy, and eigenstate
instability, we numerically confirm that the critical points of spectral
real-complex (RC) transition and many-body localization (MBL) phase transition
are not identical, and an examination of the phase diagrams reveals that the
spectral RC transition arises before the MBL phase transition, which suggests
the existence of a novel non-MBL-driven spectral RC transition. These findings
are quite unexpected, and they are entirely different from observations in
disorder-driven interacting non-Hermitian systems. This work provides a useful
reference for further research on phase transitions in disorder-free
interacting non-Hermitian systems. | http://arxiv.org/abs/2305.09387v3 |
Large vision-language models (VLMs), such as CLIP, learn rich joint
image-text representations, facilitating advances in numerous downstream tasks,
including zero-shot classification and text-to-image generation. Nevertheless,
existing VLMs exhibit a prominent well-documented limitation - they fail to
encapsulate compositional concepts such as counting. We introduce a simple yet
effective method to improve the quantitative understanding of VLMs, while
maintaining their overall performance on common benchmarks. Specifically, we
propose a new counting-contrastive loss used to finetune a pre-trained VLM in
tandem with its original objective. Our counting loss is deployed over
automatically-created counterfactual examples, each consisting of an image and
a caption containing an incorrect object count. For example, an image depicting
three dogs is paired with the caption "Six dogs playing in the yard". Our loss
encourages discrimination between the correct caption and its counterfactual
variant which serves as a hard negative example. To the best of our knowledge,
this work is the first to extend CLIP's capabilities to object counting.
Furthermore, we introduce "CountBench" - a new image-text counting benchmark
for evaluating a model's understanding of object counting. We demonstrate a
significant improvement over state-of-the-art baseline models on this task.
Finally, we leverage our count-aware CLIP model for image retrieval and
text-conditioned image generation, demonstrating that our model can produce
specific counts of objects more reliably than existing ones. | http://arxiv.org/abs/2302.12066v1 |
Purpose: Age biases have been identified as an essential factor in the
diagnosis of ASD. The objective of this study was to compare the effect of
different age groups in classifying ASD using morphological features (MF) and
morphological connectivity features (MCF). Methods: The structural magnetic
resonance imaging (sMRI) data for the study was obtained from the two publicly
available databases, ABIDE-I and ABIDE-II. We considered three age groups, 6 to
11, 11 to 18, and 6 to 18, for our analysis. The sMRI data was pre-processed
using a standard pipeline and was then parcellated into 148 different regions
according to the Destrieux atlas. The area, thickness, volume, and mean
curvature information was then extracted for each region which was used to
create a total of 592 MF and 10,878 MCF for each subject. Significant features
were identified using a statistical t-test (p<0.05) which was then used to
train a random forest (RF) classifier. Results: The results of our study
suggested that the performance of the 6 to 11 age group was the highest,
followed by the 6 to 18 and 11 to 18 ages in both MF and MCF. Overall, the MCF
with RF in the 6 to 11 age group performed better in the classification than
the other groups and produced an accuracy, F1 score, recall, and precision of
75.8%, 83.1%, 86%, and 80.4%, respectively. Conclusion: Our study thus
demonstrates that morphological connectivity and age-related diagnostic model
could be an effective approach to discriminating ASD. | http://arxiv.org/abs/2308.07356v1 |
In this article, we apply slope detection techniques to study properties of
toroidal $3$-manifolds obtained by performing Dehn surgeries on satellite knots
in the context of the $L$-space conjecture. We show that if $K$ is an $L$-space
knot or admits an irreducible rational surgery with non-left-orderable
fundamental group, then the JSJ graph of its exterior is a rooted interval.
Consequently, any rational surgery on a composite knot has a left-orderable
fundamental group. This is the left-orderable counterpart of Krcatovich's
result on the primeness of $L$-space knots, which we reprove using our methods.
Analogous results on the existence of co-orientable taut foliations are proved
when the knot has a fibred companion. Our results suggest a new approach to
establishing the counterpart of Krcatovich's result for surgeries with
co-orientable taut foliations, on which partial results have been achieved by
Delman and Roberts. Finally, we prove results on left-orderable $p/q$-surgeries
on knots with $p$ small. | http://arxiv.org/abs/2307.06815v4 |
The blast furnace (BF) is the fundamental tool used in the iron manufacture.
Due to the difficulty of accessing direct measurements of the inner phenomena,
we determined the density distribution of its internal volume in order to
improve its productivity using muography. This is an imaging technique based on
the differential absorption of a flux of incident particles, muons, by the
target under study, similar to clinical X-ray imaging. Muons are elementary
particles that have the property of passing through dense materials, up to
hundreds of meters away. Their relative absorption and deviation allows the
generation of density distribution images of an object by tracking the number
of muons received by a detector, before and after passing through a structure.
The incident direction of the detected muons is reconstructed by means of a
detector composed of 3 scintillator panels that we moved on 3 positions around
the BF. With this technique, we obtained the first 3D image of the internal
structure of a BF using a Markov Chain Monte Carlo (MCMC) inverse problem
solving algorithm on muon flux data. We were also able to perform a density
monitoring of the BF and some of its operating parameters. We distinguished the
position and shape of the cohesive zone, a key element in the productivity of a
furnace, validating this innovative measurement concept in the application to a
BF and opening the field to a series of future experiments to gain both spatial
and temporal resolution. | http://arxiv.org/abs/2301.04354v2 |
Quadruped animal locomotion emerges from the interactions between the spinal
central pattern generator (CPG), sensory feedback, and supraspinal drive
signals from the brain. Computational models of CPGs have been widely used for
investigating the spinal cord contribution to animal locomotion control in
computational neuroscience and in bio-inspired robotics. However, the
contribution of supraspinal drive to anticipatory behavior, i.e. motor behavior
that involves planning ahead of time (e.g. of footstep placements), is not yet
properly understood. In particular, it is not clear whether the brain modulates
CPG activity and/or directly modulates muscle activity (hence bypassing the
CPG) for accurate foot placements. In this paper, we investigate the
interaction of supraspinal drive and a CPG in an anticipatory locomotion
scenario that involves stepping over gaps. By employing deep reinforcement
learning (DRL), we train a neural network policy that replicates the
supraspinal drive behavior. This policy can either modulate the CPG dynamics,
or directly change actuation signals to bypass the CPG dynamics. Our results
indicate that the direct supraspinal contribution to the actuation signal is a
key component for a high gap crossing success rate. However, the CPG dynamics
in the spinal cord are beneficial for gait smoothness and energy efficiency.
Moreover, our investigation shows that sensing the front feet distances to the
gap is the most important and sufficient sensory information for learning gap
crossing. Our results support the biological hypothesis that cats and horses
mainly control the front legs for obstacle avoidance, and that hind limbs
follow an internal memory based on the front limbs' information. Our method
enables the quadruped robot to cross gaps of up to 20 cm (50% of body-length)
without any explicit dynamics modeling or Model Predictive Control (MPC). | http://arxiv.org/abs/2302.13378v1 |
In two and three dimensions, we design and analyze a posteriori error
estimators for the mixed Stokes eigenvalue problem. The unknowns on this mixed
formulation are the pseudotress, velocity and pressure. With a lowest order
mixed finite element scheme, together with a postprocressing technique, we
prove that the proposed estimator is reliable and efficient. We illustrate the
results with several numerical tests in two and three dimensions in order to
assess the performance of the estimator. | http://arxiv.org/abs/2310.13169v1 |
Self-supervised learning (SSL) speech models such as wav2vec and HuBERT have
demonstrated state-of-the-art performance on automatic speech recognition (ASR)
and proved to be extremely useful in low label-resource settings. However, the
success of SSL models has yet to transfer to utterance-level tasks such as
speaker, emotion, and language recognition, which still require supervised
fine-tuning of the SSL models to obtain good performance. We argue that the
problem is caused by the lack of disentangled representations and an
utterance-level learning objective for these tasks. Inspired by how HuBERT uses
clustering to discover hidden acoustic units, we formulate a factor analysis
(FA) model that uses the discovered hidden acoustic units to align the SSL
features. The underlying utterance-level representations are disentangled from
the content of speech using probabilistic inference on the aligned features.
Furthermore, the variational lower bound derived from the FA model provides an
utterance-level objective, allowing error gradients to be backpropagated to the
Transformer layers to learn highly discriminative acoustic units. When used in
conjunction with HuBERT's masked prediction training, our models outperform the
current best model, WavLM, on all utterance-level non-semantic tasks on the
SUPERB benchmark with only 20% of labeled data. | http://arxiv.org/abs/2305.08099v3 |
This paper introduces an approach that combines the language reasoning
capabilities of large language models (LLMs) with the benefits of local
training to tackle complex, domain-specific tasks. Specifically, the authors
demonstrate their approach by extracting structured condition codes from
pathology reports. The proposed approach utilizes local LLMs, which can be
fine-tuned to respond to specific generative instructions and provide
structured outputs. The authors collected a dataset of over 150k uncurated
surgical pathology reports, containing gross descriptions, final diagnoses, and
condition codes. They trained different model architectures, including LLaMA,
BERT and LongFormer and evaluated their performance. The results show that the
LLaMA-based models significantly outperform BERT-style models across all
evaluated metrics, even with extremely reduced precision. The LLaMA models
performed especially well with large datasets, demonstrating their ability to
handle complex, multi-label tasks. Overall, this work presents an effective
approach for utilizing LLMs to perform domain-specific tasks using accessible
hardware, with potential applications in the medical domain, where complex data
extraction and classification are required. | http://arxiv.org/abs/2308.01727v1 |
Although deep learning (DL) models have shown great success in many medical
image analysis tasks, deployment of the resulting models into real clinical
contexts requires: (1) that they exhibit robustness and fairness across
different sub-populations, and (2) that the confidence in DL model predictions
be accurately expressed in the form of uncertainties. Unfortunately, recent
studies have indeed shown significant biases in DL models across demographic
subgroups (e.g., race, sex, age) in the context of medical image analysis,
indicating a lack of fairness in the models. Although several methods have been
proposed in the ML literature to mitigate a lack of fairness in DL models, they
focus entirely on the absolute performance between groups without considering
their effect on uncertainty estimation. In this work, we present the first
exploration of the effect of popular fairness models on overcoming biases
across subgroups in medical image analysis in terms of bottom-line performance,
and their effects on uncertainty quantification. We perform extensive
experiments on three different clinically relevant tasks: (i) skin lesion
classification, (ii) brain tumour segmentation, and (iii) Alzheimer's disease
clinical score regression. Our results indicate that popular ML methods, such
as data-balancing and distributionally robust optimization, succeed in
mitigating fairness issues in terms of the model performances for some of the
tasks. However, this can come at the cost of poor uncertainty estimates
associated with the model predictions. This tradeoff must be mitigated if
fairness models are to be adopted in medical image analysis. | http://arxiv.org/abs/2303.03242v1 |
Neural networks functions are supposed to be able to encode the desired
solution of an inverse problem very efficiently. In this paper, we consider the
problem of solving linear inverse problems with neural network coders. First we
establish some correspondences of this formulation with existing concepts in
regularization theory, in particular with state space regularization, operator
decomposition and iterative regularization methods. A Gauss-Newton's method is
suitable for solving encoded linear inverse problems, which is supported by a
local convergence result. The convergence studies, however, are not complete,
and are based on a conjecture on linear independence of activation functions
and its derivatives. | http://arxiv.org/abs/2303.14058v1 |
In this paper, the notion of contraction is used to solve the
trajectory-tracking problem for a class of mechanical systems. Additionally, we
propose a dynamic extension to remove velocity measurements from the controller
while rejecting matched disturbances. In particular, we propose three control
designs stemming from the Interconnection and Damping Assignment
Passivity-Based Control approach. The first controller is a tracker that does
not require velocity measurements. The second control design solves the
trajectory-tracking problem while guaranteeing robustness with respect to
matched disturbances. Then, the third approach is a combination of both
mentioned controllers. It is shown that all proposed design methods guarantee
exponential convergence of the mechanical system to the desired (feasible)
trajectory due to the contraction property of the closed-loop system. The
applicability of this method is illustrated via the design of a controller for
an underactuated mechanical system. | http://arxiv.org/abs/2304.09910v2 |
Sun-like stars shed angular momentum due to the presence of magnetised
stellar winds. Magnetohydrodynamic models have been successful in exploring the
dependence of this "wind-braking torque" on various stellar properties, however
the influence of surface differential rotation is largely unexplored. As the
wind-braking torque depends on the rotation rate of the escaping wind, the
inclusion of differential rotation should effectively modulate the angular
momentum-loss rate based on the latitudinal variation of wind source regions.
In order to quantify the influence of surface differential rotation on the
angular momentum-loss rate of the Sun, we exploit the dependence of the
wind-braking torque on the effective rotation rate of the coronal magnetic
field. This quantity is evaluated by tracing field lines through a Potential
Field Source Surface (PFSS) model, driven by ADAPT-GONG magnetograms. The
surface rotation rates of the open magnetic field lines are then used to
construct an open-flux weighted rotation rate, from which the influence on the
wind-braking torque can be estimated. During solar minima, the rotation rate of
the corona decreases with respect to the typical solid-body rate (the
Carrington rotation period is 25.4 days), as the sources of the solar wind
shift towards the slowly-rotating poles. With increasing activity, more solar
wind emerges from the Sun's active latitudes which enforces a Carrington-like
rotation. The effect of differential rotation on the Sun's current wind-braking
torque is found to be small. The wind-braking torque is ~10-15% lower during
solar minimum, than assuming solid body rotation, and a few percent larger
during solar maximum. For more rapidly-rotating Sun-like stars, differential
rotation may play a more significant role, depending on the configuration of
the large-scale magnetic field. | http://arxiv.org/abs/2302.12700v1 |
Beeping models are models for networks of weak devices, such as sensor
networks or biological networks. In these networks, nodes are allowed to
communicate only via emitting beeps: unary pulses of energy. Listening nodes
only the capability of {\it carrier sensing}: they can only distinguish between
the presence or absence of a beep, but receive no other information. The noisy
beeping model further assumes listening nodes may be disrupted by random noise.
Despite this extremely restrictive communication model, it transpires that
complex distributed tasks can still be performed by such networks. In this
paper we provide an optimal procedure for simulating general message passing in
the beeping and noisy beeping models. We show that a round of \textsf{Broadcast
CONGEST} can be simulated in $O(\Delta\log n)$ round of the noisy (or
noiseless) beeping model, and a round of \textsf{CONGEST} can be simulated in
$O(\Delta^2\log n)$ rounds (where $\Delta$ is the maximum degree of the
network). We also prove lower bounds demonstrating that no simulation can use
asymptotically fewer rounds.
This allows a host of graph algorithms to be efficiently implemented in
beeping models. As an example, we present an $O(\log n)$-round
\textsf{Broadcast CONGEST} algorithm for maximal matching, which, when
simulated using our method, immediately implies a near-optimal $O(\Delta \log^2
n)$-round maximal matching algorithm in the noisy beeping model. | http://arxiv.org/abs/2303.15346v1 |
Can graph neural networks generalize to graphs that are different from the
graphs they were trained on, e.g., in size? In this work, we study this
question from a theoretical perspective. While recent work established such
transferability and approximation results via graph limits, e.g., via graphons,
these only apply non-trivially to dense graphs. To include frequently
encountered sparse graphs such as bounded-degree or power law graphs, we take a
perspective of taking limits of operators derived from graphs, such as the
aggregation operation that makes up GNNs. This leads to the recently introduced
limit notion of graphops (Backhausz and Szegedy, 2022). We demonstrate how the
operator perspective allows us to develop quantitative bounds on the distance
between a finite GNN and its limit on an infinite graph, as well as the
distance between the GNN on graphs of different sizes that share structural
properties, under a regularity assumption verified for various graph sequences.
Our results hold for dense and sparse graphs, and various notions of graph
limits. | http://arxiv.org/abs/2306.04495v1 |
Source separation involves the ill-posed problem of retrieving a set of
source signals that have been observed through a mixing operator. Solving this
problem requires prior knowledge, which is commonly incorporated by imposing
regularity conditions on the source signals, or implicitly learned through
supervised or unsupervised methods from existing data. While data-driven
methods have shown great promise in source separation, they often require large
amounts of data, which rarely exists in planetary space missions. To address
this challenge, we propose an unsupervised source separation scheme for domains
with limited data access that involves solving an optimization problem in the
wavelet scattering covariance representation space$\unicode{x2014}$an
interpretable, low-dimensional representation of stationary processes. We
present a real-data example in which we remove transient, thermally-induced
microtilts$\unicode{x2014}$known as glitches$\unicode{x2014}$from data recorded
by a seismometer during NASA's InSight mission on Mars. Thanks to the wavelet
scattering covariances' ability to capture non-Gaussian properties of
stochastic processes, we are able to separate glitches using only a few
glitch-free data snippets. | http://arxiv.org/abs/2301.11981v2 |
Despite the remarkable progress in semantic segmentation tasks with the
advancement of deep neural networks, existing U-shaped hierarchical typical
segmentation networks still suffer from local misclassification of categories
and inaccurate target boundaries. In an effort to alleviate this issue, we
propose a Model Doctor for semantic segmentation problems. The Model Doctor is
designed to diagnose the aforementioned problems in existing pre-trained models
and treat them without introducing additional data, with the goal of refining
the parameters to achieve better performance. Extensive experiments on several
benchmark datasets demonstrate the effectiveness of our method. Code is
available at \url{https://github.com/zhijiejia/SegDoctor}. | http://arxiv.org/abs/2302.08980v2 |
In this article, we study a mathematical system which models the dynamic of
the collective behaviour of oxygen-driven swimming bacteria in an aquatic fluid
flowing in a two dimensional bounded domain under stochastic perturbation. This
model can be seen as a stochastic version of Chemotaxis-Navier-Stokes model. We
prove the existence of a unique (probabilistic) strong solution. In addition,
we establish some properties of the strong solution. More precisely, we prove
that the unique solution is non-negative and satisfies the mass conservation
property and an energy inequality. | http://arxiv.org/abs/2301.00654v1 |
Positional reasoning is the process of ordering unsorted parts contained in a
set into a consistent structure. We present Positional Diffusion, a
plug-and-play graph formulation with Diffusion Probabilistic Models to address
positional reasoning. We use the forward process to map elements' positions in
a set to random positions in a continuous space. Positional Diffusion learns to
reverse the noising process and recover the original positions through an
Attention-based Graph Neural Network. We conduct extensive experiments with
benchmark datasets including two puzzle datasets, three sentence ordering
datasets, and one visual storytelling dataset, demonstrating that our method
outperforms long-lasting research on puzzle solving with up to +18% compared to
the second-best deep learning method, and performs on par against the
state-of-the-art methods on sentence ordering and visual storytelling. Our work
highlights the suitability of diffusion models for ordering problems and
proposes a novel formulation and method for solving various ordering tasks.
Project website at https://iit-pavis.github.io/Positional_Diffusion/ | http://arxiv.org/abs/2303.11120v1 |
Among the performance-enhancing procedures for Hopfield-type networks that
implement associative memory, Hebbian Unlearning (or dreaming) strikes for its
simplicity and its clear biological interpretation. Yet, it does not easily
lend itself to a clear analytical understanding. Here we show how Hebbian
Unlearning can be effectively described in terms of a simple evolution of the
spectrum and the eigenvectors of the coupling matrix. We use these ideas to
design new dreaming algorithms that are effective from a computational point of
view, and are analytically far more transparent than the original scheme. | http://arxiv.org/abs/2308.13445v1 |
The Ornstein-Zernike integral equation method has been employed for a
single-component hard sphere fluid in terms of the Percus-Yevick (PY) and
Martynov-Sarkisov (MS) approximations. Virial equation of state has been
computed in both approximations. An excess chemical potential has been
calculated with an analytical expression based on correlation functions, and
the entropy has been computed with a thermodynamic relation. Calculations have
been carried out for a reduced densities of 0.1 to 0.9. It has been shown that
the MS approximation gives better values than those from the PY approximation,
especially for high densities and presents a reasonable comparison with
available data in the literature. | http://arxiv.org/abs/2306.05953v1 |
We prove that for any $d>0$ there exists an embedding of the Riemann sphere
$\mathbb P^1$ in a smooth complex surface, with self-intersection $d$, such
that the germ of this embedding cannot be extended to an embedding in an
algebraic surface but the field of germs of meromorphic functions along $C$ has
transcendence degree $2$ over $\mathbb C$. We give two different constructions
of such neighborhoods, either as blowdowns of a neighborhood of the smooth
plane conic, or as ramified coverings of a neighborhood of a hyperplane section
of a surface of minimal degree. The proofs of non-algebraicity of these
neighborhoods are based on a classification, up to isomorphism, of algebraic
germs of embeddings of $\mathbb P^1$, which is also obtained in the paper. | http://arxiv.org/abs/2301.10447v3 |
We present a new algorithm, Cross-Episodic Curriculum (CEC), to boost the
learning efficiency and generalization of Transformer agents. Central to CEC is
the placement of cross-episodic experiences into a Transformer's context, which
forms the basis of a curriculum. By sequentially structuring online learning
trials and mixed-quality demonstrations, CEC constructs curricula that
encapsulate learning progression and proficiency increase across episodes. Such
synergy combined with the potent pattern recognition capabilities of
Transformer models delivers a powerful cross-episodic attention mechanism. The
effectiveness of CEC is demonstrated under two representative scenarios: one
involving multi-task reinforcement learning with discrete control, such as in
DeepMind Lab, where the curriculum captures the learning progression in both
individual and progressively complex settings; and the other involving
imitation learning with mixed-quality data for continuous control, as seen in
RoboMimic, where the curriculum captures the improvement in demonstrators'
expertise. In all instances, policies resulting from CEC exhibit superior
performance and strong generalization. Code is open-sourced at
https://cec-agent.github.io/ to facilitate research on Transformer agent
learning. | http://arxiv.org/abs/2310.08549v1 |
We prove a formula for the ${\mathbb S}_n$-equivariant Euler characteristic
of the moduli space of graphs $\mathcal{MG}_{g,n}$. Moreover, we prove that the
rational ${\mathbb S}_n$-invariant cohomology of $\mathcal{MG}_{g,n}$
stabilizes for large $n$. That means, if $n \geq g \geq 2$, then there are
isomorphisms $H^k(\mathcal{MG}_{g,n};\mathbb{Q})^{{\mathbb S}_n} \rightarrow
H^k(\mathcal{MG}_{g,n+1};\mathbb{Q})^{{\mathbb S}_{n+1}}$ for all $k$. | http://arxiv.org/abs/2306.15598v3 |
The vibrational density of states of glasses is considerably different from
that of crystals. In particular, there exist spatially localized vibrational
modes in glasses. The density of states of these non-phononic modes has been
observed to follow $g(\omega) \propto \omega^4$, where $\omega$ is the
frequency. However, in two-dimensional systems, the abundance of phonons makes
it difficult to accurately determine this non-phononic density of states
because they are strongly coupled to non-phononic modes and yield strong
system-size and preparation-protocol dependencies. In this article, we utilize
the random pinning method to suppress phonons and disentangle their coupling
with non-phononic modes and successfully calculate their density of states as
$g(\omega) \propto \omega^4$. We also study their localization properties and
confirm that low-frequency non-phononic modes in pinned systems are truly
localized without far-field contributions. We finally discuss the excess
density of states over the Debye value that results from the hybridization of
phonons and non-phononic modes. | http://arxiv.org/abs/2301.06225v1 |
We present a multi-modal stress dataset that uses digital job interviews to
induce stress. The dataset provides multi-modal data of 40 participants
including audio, video (motion capturing, facial recognition, eye tracking) as
well as physiological information (photoplethysmography, electrodermal
activity). In addition to that, the dataset contains time-continuous
annotations for stress and occurred emotions (e.g. shame, anger, anxiety,
surprise). In order to establish a baseline, five different machine learning
classifiers (Support Vector Machine, K-Nearest Neighbors, Random Forest,
Long-Short-Term Memory Network) have been trained and evaluated on the proposed
dataset for a binary stress classification task. The best-performing classifier
achieved an accuracy of 88.3% and an F1-score of 87.5%. | http://arxiv.org/abs/2303.07742v1 |
Large language models (LLMs) have formulated a blueprint for the advancement
of artificial general intelligence. Its primary objective is to function as a
human-centric (helpful, honest, and harmless) assistant. Alignment with humans
assumes paramount significance, and reinforcement learning with human feedback
(RLHF) emerges as the pivotal technological paradigm underpinning this pursuit.
Current technical routes usually include \textbf{reward models} to measure
human preferences, \textbf{Proximal Policy Optimization} (PPO) to optimize
policy model outputs, and \textbf{process supervision} to improve step-by-step
reasoning capabilities. However, due to the challenges of reward design,
environment interaction, and agent training, coupled with huge trial and error
cost of large language models, there is a significant barrier for AI
researchers to motivate the development of technical alignment and safe landing
of LLMs. The stable training of RLHF has still been a puzzle. In the first
report, we dissect the framework of RLHF, re-evaluate the inner workings of
PPO, and explore how the parts comprising PPO algorithms impact policy agent
training. We identify policy constraints being the key factor for the effective
implementation of the PPO algorithm. Therefore, we explore the PPO-max, an
advanced version of PPO algorithm, to efficiently improve the training
stability of the policy model. Based on our main results, we perform a
comprehensive analysis of RLHF abilities compared with SFT models and ChatGPT.
The absence of open-source implementations has posed significant challenges to
the investigation of LLMs alignment. Therefore, we are eager to release
technical reports, reward models and PPO codes, aiming to make modest
contributions to the advancement of LLMs. | http://arxiv.org/abs/2307.04964v2 |
Modeling sounds emitted from physical object interactions is critical for
immersive perceptual experiences in real and virtual worlds. Traditional
methods of impact sound synthesis use physics simulation to obtain a set of
physics parameters that could represent and synthesize the sound. However, they
require fine details of both the object geometries and impact locations, which
are rarely available in the real world and can not be applied to synthesize
impact sounds from common videos. On the other hand, existing video-driven deep
learning-based approaches could only capture the weak correspondence between
visual content and impact sounds since they lack of physics knowledge. In this
work, we propose a physics-driven diffusion model that can synthesize
high-fidelity impact sound for a silent video clip. In addition to the video
content, we propose to use additional physics priors to guide the impact sound
synthesis procedure. The physics priors include both physics parameters that
are directly estimated from noisy real-world impact sound examples without
sophisticated setup and learned residual parameters that interpret the sound
environment via neural networks. We further implement a novel diffusion model
with specific training and inference strategies to combine physics priors and
visual information for impact sound synthesis. Experimental results show that
our model outperforms several existing systems in generating realistic impact
sounds. More importantly, the physics-based representations are fully
interpretable and transparent, thus enabling us to perform sound editing
flexibly. | http://arxiv.org/abs/2303.16897v3 |
Airport service quality evaluation is commonly found on social media,
including Google Maps. This valuable for airport management in order to enhance
the quality of services provided. However; prior studies either provide general
review for topics discussed by travellers or provide sentimental value to tag
the entire review without specifically mentioning the airport service that is
behind such value. Accordingly, this work proposes using aspect based
sentimental analysis in order to provide more detailed analysis for travellers
reviews. This works applied aspect based sentimental analysis on data collected
from Google Map about Dubai and Doha airports. The results provide tangible
reasons to use aspect based sentimental analysis in order to understand more
the travellers and spot airport services that are in need for improvement. | http://arxiv.org/abs/2308.02548v1 |
Combining sum factorization, weighted quadrature, and row-based assembly
enables efficient higher-order computations for tensor product splines. We aim
to transfer these concepts to immersed boundary methods, which perform
simulations on a regular background mesh cut by a boundary representation that
defines the domain of interest. Therefore, we present a novel concept to divide
the support of cut basis functions to obtain regular parts suited for sum
factorization. These regions require special discontinuous weighted quadrature
rules, while Gauss-like quadrature rules integrate the remaining support. Two
linear elasticity benchmark problems confirm the derived estimate for the
computational costs of the different integration routines and their
combination. Although the presence of cut elements reduces the speed-up, its
contribution to the overall computation time declines with h-refinement. | http://arxiv.org/abs/2308.15034v1 |
Ensuring the safety of the equipment, its environment and most importantly,
the operator during robot operations is of paramount importance. Robots and
complex robotic systems are appearing in more and more industrial and
professional service applications. However, while mechanical components and
control systems are advancing rapidly, the legislation background and standards
framework for such systems and machinery are lagging behind. As part of a
fundamental research work targeting industrial robots and industry 4.0
solutions for completely automated slaughtering, it was revealed that there are
no particular standards addressing robotics systems applied to the agrifood
domain. More specifically, within the agrifood sector, the only standards
existing for the meat industry and the red meat sector are hygienic standards
related to machinery. None of the identified standards or regulations consider
the safety of autonomous robot operations or human robot collaborations in the
abattoirs. The goal of this paper is to provide a general overview of the
regulations and standards (and similar guiding documents) relevant for such
applications, that could possibly be used as guidelines during the development
of inherently safe robotic systems for abattoirs. Reviewing and summarizing the
relevant standard and legislation landscape should also offer some instrumental
help regarding the foreseen certification procedure of meat processing robots
and robot cells for slaughterhouses in the near future. | http://arxiv.org/abs/2304.14014v1 |
We aim to produce a smaller language model that is aligned to user intent.
Previous research has shown that applying distilled supervised fine-tuning
(dSFT) on larger models significantly improves task accuracy; however, these
models are unaligned, i.e. they do not respond well to natural prompts. To
distill this property, we experiment with the use of preference data from AI
Feedback (AIF). Starting from a dataset of outputs ranked by a teacher model,
we apply distilled direct preference optimization (dDPO) to learn a chat model
with significantly improved intent alignment. The approach requires only a few
hours of training without any additional sampling during fine-tuning. The final
result, Zephyr-7B, sets the state-of-the-art on chat benchmarks for 7B
parameter models, and requires no human annotation. In particular, results on
MT-Bench show that Zephyr-7B surpasses Llama2-Chat-70B, the best open-access
RLHF-based model. Code, models, data, and tutorials for the system are
available at https://github.com/huggingface/alignment-handbook. | http://arxiv.org/abs/2310.16944v1 |
We report on a novel phase-locking technique for fiber-based Mach-Zehnder
interferometers based on discrete single-photon detections, and demonstrate
this in a setup. Our interferometer decodes relative-phase-encoded optical
pulse pairs for quantum key distribution applications and requires no locking
laser in addition to the weak received signal. Our new simple locking scheme is
shown to produce an Ornstein-Uhlenbeck dynamic and achieve optimal phase noise
for a given count rate. In case of wavelength drifts that arise during the
reception of Doppler-shifted satellite signals, the arm-length difference gets
continuously readjusted to keep the interferometer phase stable. | http://arxiv.org/abs/2305.03641v2 |
The reliability of fast repeated erasures is studied experimentally and
theoretically in a 1-bit underdamped memory. The bit is encoded by the position
of a micro-mechanical oscillator whose motion is confined in a double well
potential. To contain the energetic cost of fast erasures, we use a resonator
with high quality factor $Q$: the erasure work $W$ is close to Landauer's
bound, even at high speed. The drawback is the rise of the system's temperature
$T$ due to a weak coupling to the environment. Repeated erasures without
letting the memory thermalize between operations result in a continuous
warming, potentially leading to a thermal noise overcoming the barrier between
the potential wells. In such case, the reset operation can fail to reach the
targeted logical state. The reliability is characterized by the success rate
$R^s_i$ after $i$ successive operations. $W$, $T$ and $R^s_i$ are studied
experimentally as a function of the erasure speed. Above a velocity threshold,
$T$ soars while $R^s_i$ collapses: the reliability of too fast erasures is low.
These experimental results are fully justified by two complementary models. We
demonstrate that $Q\simeq 10$ is optimal to contain energetic costs and
maintain high reliability standards for repeated erasures at any speed. | http://arxiv.org/abs/2306.15573v2 |
Background: Despite the widespread use of automated security defect detection
tools, software projects still contain many security defects that could result
in serious damage. Such tools are largely context-insensitive and may not cover
all possible scenarios in testing potential issues, which makes them
susceptible to missing complex security defects. Hence, thorough detection
entails a synergistic cooperation between these tools and human-intensive
detection techniques, including code review. Code review is widely recognized
as a crucial and effective practice for identifying security defects. Aim: This
work aims to empirically investigate security defect detection through code
review. Method: To this end, we conducted an empirical study by analyzing code
review comments derived from four projects in the OpenStack and Qt communities.
Through manually checking 20,995 review comments obtained by keyword-based
search, we identified 614 comments as security-related. Results: Our results
show that (1) security defects are not prevalently discussed in code review,
(2) more than half of the reviewers provided explicit fixing
strategies/solutions to help developers fix security defects, (3) developers
tend to follow reviewers' suggestions and action the changes, (4) Not worth
fixing the defect now and Disagreement between the developer and the reviewer
are the main causes for not resolving security defects. Conclusions: Our
research results demonstrate that (1) software security practices should
combine manual code review with automated detection tools, achieving a more
comprehensive coverage to identifying and addressing security defects, and (2)
promoting appropriate standardization of practitioners' behaviors during code
review remains necessary for enhancing software security. | http://arxiv.org/abs/2307.02326v1 |
We study one generator quasi-cyclic codes and four-circulant codes, which are
also quasi-cyclic but have two generators. We state the hull dimensions for
both classes of codes in terms of the polynomials in their generating elements.
We prove results such as the hull dimension of a four-circulant code is even
and one-dimensional hull for double-circulant codes, which are special one
generator codes, is not possible when the alphabet size $q$ is congruent to 3
mod 4. We also characterize linear complementary pairs among both classes of
codes. Computational results on the code families in consideration are provided
as well. | http://arxiv.org/abs/2307.05449v2 |
We present a theoretical investigation of the Vavilov-Cherenkov (VC)
radiation by a plane-wave or twisted electron. Special emphasis is put on the
question whether and at what conditions the emitted VC photons can be twisted.
For this aim we obtain a general expression in the coordinate and momentum
representations for the quantum state of the final electron-photon system that
is a result of the radiation process itself and does not depend on the
properties of a detector. It is shown that this evolved state is an entangled
state of an electron and a photon, and both particles can be twisted. A direct
consequence of this result follows: if one uses a detector sensitive to the
twisted electron (photon) with the definite projection of the total angular
momentum (TAM), then the final photon (electron) also will be in the twisted
state with a definite TAM projection. Further, we investigate the polarization
properties of the final twisted photon in more general conditions than has been
calculated before. Finally, we exploit a close similarity between the discussed
VC radiation and the process of the equivalent photon emission in the
Weizs\"acker-Williams method and find the corresponding final state. | http://arxiv.org/abs/2310.09864v2 |
Deep learning has been applied to compressive sensing (CS) of images
successfully in recent years. However, existing network-based methods are often
trained as the black box, in which the lack of prior knowledge is often the
bottleneck for further performance improvement. To overcome this drawback, this
paper proposes a novel CS method using non-local prior which combines the
interpretability of the traditional optimization methods with the speed of
network-based methods, called NL-CS Net. We unroll each phase from iteration of
the augmented Lagrangian method solving non-local and sparse regularized
optimization problem by a network. NL-CS Net is composed of the up-sampling
module and the recovery module. In the up-sampling module, we use learnable
up-sampling matrix instead of a predefined one. In the recovery module,
patch-wise non-local network is employed to capture long-range feature
correspondences. Important parameters involved (e.g. sampling matrix, nonlinear
transforms, shrinkage thresholds, step size, $etc.$) are learned end-to-end,
rather than hand-crafted. Furthermore, to facilitate practical implementation,
orthogonal and binary constraints on the sampling matrix are simultaneously
adopted. Extensive experiments on natural images and magnetic resonance imaging
(MRI) demonstrate that the proposed method outperforms the state-of-the-art
methods while maintaining great interpretability and speed. | http://arxiv.org/abs/2305.03899v1 |
Celtic knots are an ancient art form often attributed to Celtic cultures,
used to decorate monuments and manuscripts, and to symbolise eternity and
interconnectedness. This paper describes the framework CelticGraph to draw
graphs as Celtic knots and links. The drawing process raises interesting
combinatorial concepts in the theory of circuits in planar graphs. Further,
CelticGraph uses a novel algorithm to represent edges as B\'ezier curves,
aiming to show each link as a smooth curve with limited curvature. | http://arxiv.org/abs/2309.02852v2 |
Recently, growing interest has been aroused in extending the multimodal
capability of large language models (LLMs), e.g., vision-language (VL)
learning, which is regarded as the next milestone of artificial general
intelligence. However, existing solutions are prohibitively expensive, which
not only need to optimize excessive parameters, but also require another
large-scale pre-training before VL instruction tuning. In this paper, we
propose a novel and affordable solution for the effective VL adaption of LLMs,
called Mixture-of-Modality Adaptation (MMA). Instead of using large neural
networks to connect the image encoder and LLM, MMA adopts lightweight modules,
i.e., adapters, to bridge the gap between LLMs and VL tasks, which also enables
the joint optimization of the image and language models. Meanwhile, MMA is also
equipped with a routing algorithm to help LLMs achieve an automatic shift
between single- and multi-modal instructions without compromising their ability
of natural language understanding. To validate MMA, we apply it to a recent LLM
called LLaMA and term this formed large vision-language instructed model as
LaVIN. To validate MMA and LaVIN, we conduct extensive experiments under two
setups, namely multimodal science question answering and multimodal dialogue.
The experimental results not only demonstrate the competitive performance and
the superior training efficiency of LaVIN than existing multimodal LLMs, but
also confirm its great potential as a general-purpose chatbot. More
importantly, the actual expenditure of LaVIN is extremely cheap, e.g., only 1.4
training hours with 3.8M trainable parameters, greatly confirming the
effectiveness of MMA. Our project is released at
https://luogen1996.github.io/lavin. | http://arxiv.org/abs/2305.15023v3 |
Despite the growing use of transformer models in computer vision, a
mechanistic understanding of these networks is still needed. This work
introduces a method to reverse-engineer Vision Transformers trained to solve
image classification tasks. Inspired by previous research in NLP, we
demonstrate how the inner representations at any level of the hierarchy can be
projected onto the learned class embedding space to uncover how these networks
build categorical representations for their predictions. We use our framework
to show how image tokens develop class-specific representations that depend on
attention mechanisms and contextual information, and give insights on how
self-attention and MLP layers differentially contribute to this categorical
composition. We additionally demonstrate that this method (1) can be used to
determine the parts of an image that would be important for detecting the class
of interest, and (2) exhibits significant advantages over traditional linear
probing approaches. Taken together, our results position our proposed framework
as a powerful tool for mechanistic interpretability and explainability
research. | http://arxiv.org/abs/2310.18969v1 |
Prompt tuning for pre-trained masked language models (MLM) has shown
promising performance in natural language processing tasks with few labeled
examples. It tunes a prompt for the downstream task, and a verbalizer is used
to bridge the predicted token and label prediction. Due to the limited training
data, prompt initialization is crucial for prompt tuning. Recently,
MetaPrompting (Hou et al., 2022) uses meta-learning to learn a shared
initialization for all task-specific prompts. However, a single initialization
is insufficient to obtain good prompts for all tasks and samples when the tasks
are complex. Moreover, MetaPrompting requires tuning the whole MLM, causing a
heavy burden on computation and memory as the MLM is usually large. To address
these issues, we use a prompt pool to extract more task knowledge and construct
instance-dependent prompts via attention. We further propose a novel soft
verbalizer (RepVerb) which constructs label embedding from feature embeddings
directly. Combining meta-learning the prompt pool and RepVerb, we propose
MetaPrompter for effective structured prompting. MetaPrompter is
parameter-efficient as only the pool is required to be tuned. Experimental
results demonstrate that MetaPrompter performs better than the recent
state-of-the-arts and RepVerb outperforms existing soft verbalizers. | http://arxiv.org/abs/2306.00618v2 |
Psychoactive substances, which influence the brain to alter perceptions and
moods, have the potential to have positive and negative effects on critical
software engineering tasks. They are widely used in software, but that use is
not well understood. We present the results of the first qualitative
investigation of the experiences of, and challenges faced by, psychoactive
substance users in professional software communities. We conduct a thematic
analysis of hour-long interviews with 26 professional programmers who use
psychoactive substances at work. Our results provide insight into individual
motivations and impacts, including mental health and the relationships between
various substances and productivity. Our findings elaborate on socialization
effects, including soft skills, stigma, and remote work. The analysis also
highlights implications for organizational policy, including positive and
negative impacts on recruitment and retention. By exploring individual usage
motivations, social and cultural ramifications, and organizational policy, we
demonstrate how substance use can permeate all levels of software development. | http://arxiv.org/abs/2305.01056v1 |
Harnessing the optoelectronic response of organic semiconductors requires a
thorough understanding of the fundamental light-matter interaction that is
dominated by the excitation of correlated electron-hole pairs, i.e. excitons.
The nature of these excitons would be fully captured by knowing the
quantum-mechanical wavefunction, which, however, is difficult to access both
theoretically and experimentally. Here, we use femtosecond photoemission
orbital tomography in combination with many-body perturbation theory to gain
access to exciton wavefunctions in organic semiconductors. We find that the
coherent sum of multiple electron-hole pair contributions that typically make
up a single exciton can be experimentally evidenced by photoelectron
spectroscopy. For the prototypical organic semiconductor buckminsterfullerene
(C$_{60}$), we show how to disentangle such multiorbital contributions and
thereby access key properties of the exciton wavefunctions including
localization, charge-transfer character, and ultrafast exciton formation and
relaxation dynamics. | http://arxiv.org/abs/2303.13904v1 |
The aim of this article is to describe the idea of Clairaut slant Riemannian
maps from Riemannian manifolds to K\"ahler manifolds. First, for the slant
Riemannian map, we obtain the necessary and sufficient conditions for a curve
to be a geodesic on the base manifold. Further, we find the necessary and
sufficient conditions for the slant Riemannian map to be a Clairaut slant
Riemannian map; for Clairaut slant Riemannian map to be totally geodesic; for
the base manifold to be a locally product manifold. Further, we obtain the
necessary and sufficient condition for the integrability of range of derivative
map. Also, we investigate the harmonicity of Clairaut slant Riemannian map.
Finally, we get two inequalities in terms of second fundamental form of a
Clairaut slant Riemannian map and check the equality case. | http://arxiv.org/abs/2306.08244v1 |
The Cichorium genus offers a unique opportunity to study the sporophytic self
incompatibility (SSI) system, being composed of species characterized by highly
efficient SI (C. intybus) and complete self compatibility (C. endivia). The
chicory genome was used to map 7 previously identified SSI locus-associated
markers. The region containing the S locus was restricted to an 4 M bp window
on chromosome 5. Among the genes predicted in this region, MDIS1 INTERACTING
RECEPTOR LIKE KINASE 2 (MIK2) was promising as a candidate for SSI. Its
ortholog in Arabidopsis is involved in pollen stigma recognition reactions, and
its protein structure is similar to that of S-receptor kinase (SRK), a key
component of the SSI in the Brassica genus. The sequencing of MIK2 in chicory
and endive accessions revealed two contrasting scenarios. In C. endivia, MIK2
was fully conserved even comparing different botanical varieties (smooth and
curly). In C. intybus, 387 SNPs and 3 INDELs were identified when comparing
accessions of different biotypes from the same botanical variety (radicchio).
The SNP distribution throughout the gene was uneven, with hypervariable domains
preferentially localized in the LRR-rich extracellular region, putatively
identified as the receptor domain. The gene was hypothesized to be under
positive selection, as the nonsynonymous mutations were more than double the
synonymous ones (dN / dS = 2.17). An analogous situation was observed analyzing
the first 500 bp of the MIK2 promoter: no SNPs were observed among the endive
samples, whereas 44 SNPs and 6 INDELs were detected among the chicory samples.
Further analyses are needed to confirm the role of MIK2 in SSI and to
demonstrate whether the 23 species-specific nonsynonymous SNPs in the CDS
and/or the species-specific 10 bp INDEL found in a CCAAT box region of the
promoter are responsible for the contrasting sexual behaviors of the two
species. | http://arxiv.org/abs/2304.06410v1 |
We present high-resolution VLT/UVES spectroscopy and a detailed analysis of
the unique Broad Absorption-Line system towards the quasar SDSS
J165252.67+265001.96. This system exhibits low-ionization metal absorption
lines from the ground states and excited energy levels of Fe II and Mn II, and
the meta-stable 2^3S excited state of He I. The extended kinematics of the
absorber encompasses three main clumps with velocity offsets of -5680, -4550,
and -1770 km s$^{-1}$ from the quasar emission redshift, $z=0.3509\pm0.0003$,
derived from [O II] emission. Each clump shows moderate partial covering of the
background continuum source, $C_f \approx [0.53; 0.24; 0.81]$. We discuss the
excitation mechanisms at play in the gas, which we use to constrain the
distance of the clouds from the Active Galactic Nucleus (AGN) as well as the
density, temperature, and typical sizes of the clouds. The number density is
found to be $n_{\rm H} \sim 10^4\rm cm^{-3}$ and the temperature $T_e \sim
10^4\rm\,K$, with longitudinal cloudlet sizes of $\gtrsim0.01$ pc. Cloudy
photo-ionization modelling of He I$^{*}$, which is also produced at the
interface between the neutral and ionized phases, assuming the number densities
derived from Fe II, constrains the ionization parameter to be $\log U \sim -3$.
This corresponds to distances of a few 100 pc from the AGN. We discuss these
results in the more general context of associated absorption-line systems and
propose a connection between FeLoBALs and the recently-identified
molecular-rich intrinsic absorbers. Studies of significant samples of FeLoBALs,
even though rare per se, will soon be possible thanks to large dedicated
surveys paired with high-resolution spectroscopic follow-ups. | http://arxiv.org/abs/2307.09273v2 |
By integrating complementary information from RGB image and depth map, the
ability of salient object detection (SOD) for complex and challenging scenes
can be improved. In recent years, the important role of Convolutional Neural
Networks (CNNs) in feature extraction and cross-modality interaction has been
fully explored, but it is still insufficient in modeling global long-range
dependencies of self-modality and cross-modality. To this end, we introduce
CNNs-assisted Transformer architecture and propose a novel RGB-D SOD network
with Point-aware Interaction and CNN-induced Refinement (PICR-Net). On the one
hand, considering the prior correlation between RGB modality and depth
modality, an attention-triggered cross-modality point-aware interaction (CmPI)
module is designed to explore the feature interaction of different modalities
with positional constraints. On the other hand, in order to alleviate the block
effect and detail destruction problems brought by the Transformer naturally, we
design a CNN-induced refinement (CNNR) unit for content refinement and
supplementation. Extensive experiments on five RGB-D SOD datasets show that the
proposed network achieves competitive results in both quantitative and
qualitative comparisons. | http://arxiv.org/abs/2308.08930v1 |
BaAgAs is a ternary Dirac semimetal which can be tuned across a number of
topological orders. In this study we have investigated the bulk physical
properties of BaAgAs using density functional theory based computations. Most
of the results presented in this work are novel. The optimized structural
parameters are in good agreement with previous results. The elastic constants
indicate that BaAgAs is mechanically stable and brittle in nature. The compound
is moderately hard and possesses fair degree of machinability. There is
significant mechanical/elastic anisotropy in BaAgAs. The Debye temperature of
the compound is medium and the phonon thermal conductivity and melting
temperature are moderate as well. The bonding character is mixed with notable
covalent contribution. The electronic band structure calculations reveal clear
semimetallic behavior with a Dirac node at the Fermi level. BaAgAs has a small
ellipsoidal Fermi surface centered at the G-point of the Brillouin zone. The
phonon dispersion curves show dynamical stability. There is a clear phonon band
gap between the acoustic and the optical branches. The energy dependent optical
constants conform to the band structure calculations. The compound is an
efficient absorber of the ultraviolet light and has potential to be used as an
anti-reflection coating. Optical anisotropy of BaAgAs is moderate. The computed
repulsive Coulomb pseudopotential is low indicating that the electronic
correlations in this compound are not strong. | http://arxiv.org/abs/2305.07427v1 |
In the last decades, the capacity to generate large amounts of data in
science and engineering applications has been growing steadily. Meanwhile,
machine learning has progressed to become a suitable tool to process and
utilise the available data. Nonetheless, many relevant scientific and
engineering problems present challenges where current machine learning methods
cannot yet efficiently leverage the available data and resources. For example,
in scientific discovery, we are often faced with the problem of exploring very
large, structured and high-dimensional spaces. Moreover, the high fidelity,
black-box objective function is often very expensive to evaluate. Progress in
machine learning methods that can efficiently tackle such challenges would help
accelerate currently crucial areas such as drug and materials discovery. In
this paper, we propose a multi-fidelity active learning algorithm with
GFlowNets as a sampler, to efficiently discover diverse, high-scoring
candidates where multiple approximations of the black-box function are
available at lower fidelity and cost. Our evaluation on molecular discovery
tasks shows that multi-fidelity active learning with GFlowNets can discover
high-scoring candidates at a fraction of the budget of its single-fidelity
counterpart while maintaining diversity, unlike RL-based alternatives. These
results open new avenues for multi-fidelity active learning to accelerate
scientific discovery and engineering design. | http://arxiv.org/abs/2306.11715v2 |
Recently, deception detection on human videos is an eye-catching techniques
and can serve lots applications. AI model in this domain demonstrates the high
accuracy, but AI tends to be a non-interpretable black box. We introduce an
attention-aware neural network addressing challenges inherent in video data and
deception dynamics. This model, through its continuous assessment of visual,
audio, and text features, pinpoints deceptive cues. We employ a multimodal
fusion strategy that enhances accuracy; our approach yields a 92\% accuracy
rate on a real-life trial dataset. Most important of all, the model indicates
the attention focus in the videos, providing valuable insights on deception
cues. Hence, our method adeptly detects deceit and elucidates the underlying
process. We further enriched our study with an experiment involving students
answering questions either truthfully or deceitfully, resulting in a new
dataset of 309 video clips, named ATSFace. Using this, we also introduced a
calibration method, which is inspired by Low-Rank Adaptation (LoRA), to refine
individual-based deception detection accuracy. | http://arxiv.org/abs/2309.01383v1 |
A near-field secure transmission framework is proposed. Employing the hybrid
beamforming architecture, a multi-antenna base station (BS) transmits
confidential information to a multi-antenna legitimate user (U) against a
multi-antenna eavesdropper (E) in the near field. A two-stage algorithm is
proposed to maximize the near-field secrecy capacity. Based on the
fully-digital beamformers obtained in the first stage, the optimal analog
beamformers and baseband digital beamformers can be alternatingly derived in
the closed-form expressions in the second stage. Numerical results demonstrate
that in contrast to the far-field secure communication relying on the angular
disparity, the near-field secure communication mainly relies on the distance
disparity between U and E. | http://arxiv.org/abs/2302.04189v3 |
Effectively localizing an agent in a realistic, noisy setting is crucial for
many embodied vision tasks. Visual Odometry (VO) is a practical substitute for
unreliable GPS and compass sensors, especially in indoor environments. While
SLAM-based methods show a solid performance without large data requirements,
they are less flexible and robust w.r.t. to noise and changes in the sensor
suite compared to learning-based approaches. Recent deep VO models, however,
limit themselves to a fixed set of input modalities, e.g., RGB and depth, while
training on millions of samples. When sensors fail, sensor suites change, or
modalities are intentionally looped out due to available resources, e.g., power
consumption, the models fail catastrophically. Furthermore, training these
models from scratch is even more expensive without simulator access or suitable
existing models that can be fine-tuned. While such scenarios get mostly ignored
in simulation, they commonly hinder a model's reusability in real-world
applications. We propose a Transformer-based modality-invariant VO approach
that can deal with diverse or changing sensor suites of navigation agents. Our
model outperforms previous methods while training on only a fraction of the
data. We hope this method opens the door to a broader range of real-world
applications that can benefit from flexible and learned VO models. | http://arxiv.org/abs/2305.00348v1 |
DNA self-assembly is an important tool that has a wide range of applications
such as building nanostructures, the transport of target virotherapies, and
nano-circuitry. Tools from graph theory can be used to encode the biological
process of DNA self-assembly. The principle component of this process is to
examine collections of branched junction molecules, called pots, and study the
types of structures that can be constructed. We restrict our attention to pots
which contain one set of complementary cohesive-ends, i.e. a single bond-edge
type, and we identify the types and sizes of structures that can be built from
such a pot. In particular, we show a dependence between the order of graphs in
the output of the pot and the number of arms on the corresponding tiles.
Furthermore, we provide two algorithms which will construct complete complexes
for a pot with a single bond-edge type. | http://arxiv.org/abs/2310.04398v1 |
Self-supervised techniques for learning speech representations have been
shown to develop linguistic competence from exposure to speech without the need
for human labels. In order to fully realize the potential of these approaches
and further our understanding of how infants learn language, simulations must
closely emulate real-life situations by training on developmentally plausible
corpora and benchmarking against appropriate test sets. To this end, we propose
a language-acquisition-friendly benchmark to probe spoken language models at
the lexical and syntactic levels, both of which are compatible with the
vocabulary typical of children's language experiences. This paper introduces
the benchmark and summarizes a range of experiments showing its usefulness. In
addition, we highlight two exciting challenges that need to be addressed for
further progress: bridging the gap between text and speech and between clean
speech and in-the-wild speech. | http://arxiv.org/abs/2306.01506v2 |
We study d=4, $N\geq 5$ supergravities and their deformation via candidate
counterterms, with the purpose to absorb UV divergences. We generalize the
earlier studies of deformation and twisted self-duality constraint to the case
with unbroken local H-symmetry in presence of fermions. We find that the
deformed action breaks nonlinear local supersymmetry. We show that all known
cases of enhanced UV divergence cancellations are explained by nonlinear local
supersymmetry.
This result implies, in particular, that if N=5 supergravity at five loop
will turn out to be UV divergent, the deformed theory will be BRST
inconsistent. If it will be finite, it will be a consequence of nonlinear local
supersymmetry and E7-type duality. | http://arxiv.org/abs/2304.10514v1 |
Large Language Models (LLMs) have demonstrated exceptional proficiency in
instruction-following, becoming increasingly crucial across various
applications. However, this capability brings with it the risk of prompt
injection attacks, where attackers inject instructions into LLMs' input to
elicit undesirable actions or content. Understanding the robustness of LLMs
against such attacks is vital for their safe implementation. In this work, we
establish a benchmark to evaluate the robustness of instruction-following LLMs
against prompt injection attacks. Our objective is to determine the extent to
which LLMs can be influenced by injected instructions and their ability to
differentiate between these injected and original target instructions. Through
extensive experiments with leading instruction-following LLMs, we uncover
significant vulnerabilities in their robustness to such attacks. Our results
indicate that some models are overly tuned to follow any embedded instructions
in the prompt, overly focusing on the latter parts of the prompt without fully
grasping the entire context. By contrast, models with a better grasp of the
context and instruction-following capabilities will potentially be more
susceptible to compromise by injected instructions. This underscores the need
to shift the focus from merely enhancing LLMs' instruction-following
capabilities to improving their overall comprehension of prompts and
discernment of instructions that are appropriate to follow. We hope our
in-depth analysis offers insights into the underlying causes of these
vulnerabilities, aiding in the development of future solutions. Code and data
are available at
https://github.com/Leezekun/instruction-following-robustness-eval | http://arxiv.org/abs/2308.10819v3 |
The increasing complexity of AI systems has led to the growth of the field of
Explainable Artificial Intelligence (XAI), which aims to provide explanations
and justifications for the outputs of AI algorithms. While there is
considerable demand for XAI, there remains a scarcity of studies aimed at
comprehensively understanding the practical distinctions among different
methods and effectively aligning each method with users individual needs, and
ideally, offer a mapping function which can map each user with its specific
needs to a method of explainability. This study endeavors to bridge this gap by
conducting a thorough review of extant research in XAI, with a specific focus
on Explainable Machine Learning (XML), and a keen eye on user needs. Our main
objective is to offer a classification of XAI methods within the realm of XML,
categorizing current works into three distinct domains: philosophy, theory, and
practice, and providing a critical review for each category. Moreover, our
study seeks to facilitate the connection between XAI users and the most
suitable methods for them and tailor explanations to meet their specific needs
by proposing a mapping function that take to account users and their desired
properties and suggest an XAI method to them. This entails an examination of
prevalent XAI approaches and an evaluation of their properties. The primary
outcome of this study is the formulation of a clear and concise strategy for
selecting the optimal XAI method to achieve a given goal, all while delivering
personalized explanations tailored to individual users. | http://arxiv.org/abs/2302.03180v2 |
We detail a quantum circuit capable of efficiently encoding analytical
approximations to gravitational wave signal waveforms of compact binary
coalescences into the amplitudes of quantum bits using both quantum arithmetic
operations and hybrid classical-quantum generative modelling. The gate cost of
the proposed method is considered and compared to a state preparation routine
for arbitrary amplitudes, where we demonstrate up to a four orders of magnitude
reduction in gate cost when considering the encoding of gravitational waveforms
representative of binary neutron star inspirals detectable to the Einstein
telescope. We demonstrate through a quantum simulation, that is limited to 28
qubits, the encoding of a second post-Newtonian inspiral waveform with a
fidelity compared to the desired state of 0.995 when using the Grover-Rudolph
algorithm, or 0.979 when using a trained quantum generative adversarial network
with a significant reduction of required gates. | http://arxiv.org/abs/2306.11073v1 |
Given a group $\Gamma,$ its Bohr compactification
$\operatorname{Bohr}(\Gamma)$ and its profinite completion
$\operatorname{Prof}(\Gamma)$ are compact groups naturally associated to
$\Gamma$; moreover, $\operatorname{Prof}(\Gamma)$ can be identified with the
quotient of $\operatorname{Bohr}(\Gamma)$ by its connected component
$\operatorname{Bohr}(\Gamma)_0.$ We study the structure of
$\operatorname{Bohr}(\Gamma)$ for an arithmetic subgroup $\Gamma$ of an
algebraic group $G$ over $\mathbf{Q}$. When $G$ is unipotent, we show that
$\operatorname{Bohr}(\Gamma)$ can be identified with the direct product
$\operatorname{Bohr}(\Gamma^{\rm Ab})_0\times \operatorname{Prof}(\Gamma)$,
where $\Gamma^{\rm Ab}= \Gamma/[\Gamma, \Gamma]$ is the abelianization of
$\Gamma.$ In the general case, using a Levi decomposition $G= U\rtimes H$
(where $U$ is unipotent and $H$ is reductive), we show that
$\operatorname{Bohr}(\Gamma)$ can be described as the semi-direct product of a
certain quotient of $\operatorname{Bohr}(\Gamma\cap U)$ with
$\operatorname{Bohr}(\Gamma \cap H)$. When $G$ is simple and has higher
$\mathbf{R}$-rank, $\operatorname{Bohr}(\Gamma)$ is isomorphic, up to a finite
group, to the product $K\times \operatorname{Prof}(\Gamma),$ where $K$ is the
maximal compact factor of the real Lie group $G(\mathbf{R}).$ | http://arxiv.org/abs/2304.09045v1 |
We present novel results related to isomorphic resonance graphs of
2-connected outerplane bipartite graphs. As the main result, we provide a
structure characterization for 2-connected outerplane bipartite graphs with
isomorphic resonance graphs. Moreover, two additional characterizations are
expressed in terms of resonance digraphs and via local structures of inner
duals of 2-connected outerplane bipartite graphs, respectively. | http://arxiv.org/abs/2306.07611v1 |
Robustness in Simultaneous Localization and Mapping (SLAM) remains one of the
key challenges for the real-world deployment of autonomous systems. SLAM
research has seen significant progress in the last two and a half decades, yet
many state-of-the-art (SOTA) algorithms still struggle to perform reliably in
real-world environments. There is a general consensus in the research community
that we need challenging real-world scenarios which bring out different failure
modes in sensing modalities. In this paper, we present a novel multi-modal
indoor SLAM dataset covering challenging common scenarios that a robot will
encounter and should be robust to. Our data was collected with a mobile
robotics platform across multiple floors at Northeastern University's ISEC
building. Such a multi-floor sequence is typical of commercial office spaces
characterized by symmetry across floors and, thus, is prone to perceptual
aliasing due to similar floor layouts. The sensor suite comprises seven global
shutter cameras, a high-grade MEMS inertial measurement unit (IMU), a ZED
stereo camera, and a 128-channel high-resolution lidar. Along with the dataset,
we benchmark several SLAM algorithms and highlight the problems faced during
the runs, such as perceptual aliasing, visual degradation, and trajectory
drift. The benchmarking results indicate that parts of the dataset work well
with some algorithms, while other data sections are challenging for even the
best SOTA algorithms. The dataset is available at
https://github.com/neufieldrobotics/NUFR-M3F. | http://arxiv.org/abs/2306.08522v1 |
The main objective of this paper is to derive the optimality conditions for
one type of fuzzy optimization problems. At the beginning, we define a cone of
descent direction for fuzzy optimization, and prove that its intersection with
the cone of feasible directions at an optimal point is an empty set. Then, we
present first-order optimality conditions for fuzzy optimization problems.
Furthermore, we generalize the Gordan's theorem for fuzzy linear inequality
systems and utilize it to deduce the Fritz-John optimality condition for the
fuzzy optimization with inequality constraints. Finally, we apply the
optimality conditions established in this paper to a binary classification
problem for support vector machines with fuzzy data. In the meantime, numerical
examples are described to demonstrate the primary findings proposed in the
present paper. | http://arxiv.org/abs/2308.01914v1 |
As research interests in medical image analysis become increasingly
fine-grained, the cost for extensive annotation also rises. One feasible way to
reduce the cost is to annotate with coarse-grained superclass labels while
using limited fine-grained annotations as a complement. In this way,
fine-grained data learning is assisted by ample coarse annotations. Recent
studies in classification tasks have adopted this method to achieve
satisfactory results. However, there is a lack of research on efficient
learning of fine-grained subclasses in semantic segmentation tasks. In this
paper, we propose a novel approach that leverages the hierarchical structure of
categories to design network architecture. Meanwhile, a task-driven data
generation method is presented to make it easier for the network to recognize
different subclass categories. Specifically, we introduce a Prior Concatenation
module that enhances confidence in subclass segmentation by concatenating
predicted logits from the superclass classifier, a Separate Normalization
module that stretches the intra-class distance within the same superclass to
facilitate subclass segmentation, and a HierarchicalMix model that generates
high-quality pseudo labels for unlabeled samples by fusing only similar
superclass regions from labeled and unlabeled images. Our experiments on the
BraTS2021 and ACDC datasets demonstrate that our approach achieves comparable
accuracy to a model trained with full subclass annotations, with limited
subclass annotations and sufficient superclass annotations. Our approach offers
a promising solution for efficient fine-grained subclass segmentation in
medical images. Our code is publicly available here. | http://arxiv.org/abs/2307.00257v1 |
Pretrained language models have improved zero-shot text classification by
allowing the transfer of semantic knowledge from the training data in order to
classify among specific label sets in downstream tasks. We propose a simple way
to further improve zero-shot accuracies with minimal effort. We curate small
finetuning datasets intended to describe the labels for a task. Unlike typical
finetuning data, which has texts annotated with labels, our data simply
describes the labels in language, e.g., using a few related terms,
dictionary/encyclopedia entries, and short templates. Across a range of topic
and sentiment datasets, our method is more accurate than zero-shot by 17-19%
absolute. It is also more robust to choices required for zero-shot
classification, such as patterns for prompting the model to classify and
mappings from labels to tokens in the model's vocabulary. Furthermore, since
our data merely describes the labels but does not use input texts, finetuning
on it yields a model that performs strongly on multiple text domains for a
given label set, even improving over few-shot out-of-domain classification in
multiple settings. | http://arxiv.org/abs/2305.02239v2 |
The task of radiology reporting comprises describing and interpreting the
medical findings in radiographic images, including description of their
location and appearance. Automated approaches to radiology reporting require
the image to be encoded into a suitable token representation for input to the
language model. Previous methods commonly use convolutional neural networks to
encode an image into a series of image-level feature map representations.
However, the generated reports often exhibit realistic style but imperfect
accuracy. Inspired by recent works for image captioning in the general domain
in which each visual token corresponds to an object detected in an image, we
investigate whether using local tokens corresponding to anatomical structures
can improve the quality of the generated reports. We introduce a novel
adaptation of Faster R-CNN in which finding detection is performed for the
candidate bounding boxes extracted during anatomical structure localisation. We
use the resulting bounding box feature representations as our set of
finding-aware anatomical tokens. This encourages the extracted anatomical
tokens to be informative about the findings they contain (required for the
final task of radiology reporting). Evaluating on the MIMIC-CXR dataset of
chest X-Ray images, we show that task-aware anatomical tokens give
state-of-the-art performance when integrated into an automated reporting
pipeline, yielding generated reports with improved clinical accuracy. | http://arxiv.org/abs/2308.15961v1 |
We have examined inclusive $\mu^+\mu^- \rightarrow \mu^+ \mu^- +
E_{\mathrm{miss}}$ and annihilation $\mu^+\mu^- \rightarrow \mu^+ \mu^-$
processes at future high energy muon colliders in the framework of the
Randall-Sundrum-like model with a small curvature of space-time. The collision
energies of 3 TeV, 14 TeV and, 100 TeV are addressed. Both differential and
total cross sections are calculated, and exclusion bounds on a 5-dimensional
gravity scale are obtained depending on collision energy and integrated
luminosity of the muon colliders. | http://arxiv.org/abs/2301.08585v3 |
We study the behavior of a hadronic matter in the presence of an external
magnetic field within the van der Waals hadron resonance gas model, considering
both attractive and repulsive interactions among the hadrons. Various
thermodynamic quantities like pressure ($P$), energy density ($\varepsilon$),
magnetization ($\mathcal{M}$), entropy density ($s$), squared speed of sound
($c_{\rm s}^{2}$), and specific-heat capacity at constant volume ($c_{v}$) are
calculated as functions of temperature ($T$) and static finite magnetic field
($eB$). We also consider the effect of baryochemical potential ($\mu_{B}$) on
the above-mentioned thermodynamic observables in the presence of a magnetic
field. Further, we estimate the magnetic susceptibility ($\chi_{\rm M}^{2}$),
relative permeability ($\mu_{\rm r}$), and electrical susceptibility
($\chi_{\rm Q}^{2}$) which can help us to understand the system better. Through
this model, we quantify a liquid-gas phase transition in the T-eB-$\mu_B$ phase
space. | http://arxiv.org/abs/2306.03477v2 |
Text generation models are notoriously vulnerable to errors in the training
data. With the wide-spread availability of massive amounts of web-crawled data
becoming more commonplace, how can we enhance the robustness of models trained
on a massive amount of noisy web-crawled text? In our work, we propose Error
Norm Truncation (ENT), a robust enhancement method to the standard training
objective that truncates noisy data. Compared to methods that only uses the
negative log-likelihood loss to estimate data quality, our method provides a
more accurate estimation by considering the distribution of non-target tokens,
which is often overlooked by previous work. Through comprehensive experiments
across language modeling, machine translation, and text summarization, we show
that equipping text generation models with ENT improves generation quality over
standard training and previous soft and hard truncation methods. Furthermore,
we show that our method improves the robustness of models against two of the
most detrimental types of noise in machine translation, resulting in an
increase of more than 2 BLEU points over the MLE baseline when up to 50% of
noise is added to the data. | http://arxiv.org/abs/2310.00840v2 |
We aim to leverage the interactions between users and items in the Steam
community to build a game recommendation system that makes personalized
suggestions to players in order to boost Steam's revenue as well as improve the
users' gaming experience. The whole project is built on Apache Spark and deals
with Big Data. The final output of the project is a recommendation system that
gives a list of the top 5 items that the users will possibly like.6 | http://arxiv.org/abs/2305.04890v1 |
Explaining the decisions made by machine learning models for high-stakes
applications is critical for increasing transparency and guiding improvements
to these decisions. This is particularly true in the case of models for graphs,
where decisions often depend on complex patterns combining rich structural and
attribute data. While recent work has focused on designing so-called post-hoc
explainers, the broader question of what constitutes a good explanation remains
open. One intuitive property is that explanations should be sufficiently
informative to reproduce the predictions given the data. In other words, a good
explainer can be repurposed as a predictor. Post-hoc explainers do not achieve
this goal as their explanations are highly dependent on fixed model parameters
(e.g., learned GNN weights). To address this challenge, we propose RAGE (Robust
Ante-hoc Graph Explainer), a novel and flexible ante-hoc explainer designed to
discover explanations for graph neural networks using bilevel optimization,
with a focus on the chemical domain. RAGE can effectively identify molecular
substructures that contain the full information needed for prediction while
enabling users to rank these explanations in terms of relevance. Our
experiments on various molecular classification tasks show that RAGE
explanations are better than existing post-hoc and ante-hoc approaches. | http://arxiv.org/abs/2305.15745v2 |
Using electronic structure calculations based on density functional theory,
we predict and study the structural, mechanical, electronic, magnetic and
transport properties of a new full Heusler chalcogenide, namely, Fe$_2$CrTe,
both in bulk and heterostructure form. The system shows a ferromagnetic and
half-metallic(HM) like behavior, with a very high (about 95%) spin polarization
at the Fermi level, in its cubic phase. Interestingly, under tetragonal
distortion, a clear minimum (with almost the same energy as the cubic phase)
has also been found, at a c/a value of 1.26, which, however, shows a
ferrimagnetic and fully metallic nature. The compound has been found to be
dynamically stable in both the phases against the lattice vibration. The
elastic properties indicate that the compound is mechanically stable in both
the phases, following the stability criteria of the cubic and tetragonal
phases. The elastic parameters unveil the mechanically anisotropic and ductile
nature of the alloy system. Due to the HM-like behavior of the cubic phase and
keeping in mind the practical aspects, we probe the effect of strain as well as
substrate on various physical properties of this alloy. Transmission profile of
the Fe$_2$CrTe/MgO/Fe$_2$CrTe heterojunction has been calculated to probe it as
a magnetic tunneling junction (MTJ) material in both the cubic and tetragonal
phases. Considerably large tunneling magnetoresistance ratio (TMR) of 1000% is
observed for the tetragonal phase, which is found to be one order of magnitude
larger than that of the cubic phase. | http://arxiv.org/abs/2301.09843v1 |
We analyze a game-theoretic abstraction of epidemic containment played on an
undirected graph $G$: each player is associated with a node in $G$ and can
either acquire protection from a contagious process or risk infection. After
decisions are made, an infection starts at a random node $v$ and propagates
through all unprotected nodes reachable from $v$. It is known that the price of
anarchy (PoA) in $n$-node graphs can be as large as $\Theta(n)$. Our main
result is a tight bound of order $\sqrt{n\Delta}$ on the PoA, where $\Delta$ is
the maximum degree of the graph. We also study additional factors that can
reduce the PoA, such as higher thresholds for contagion and varying the costs
of becoming infected vs. acquiring protection. | http://arxiv.org/abs/2304.12303v1 |
We study the dissociation effect of $J/\Psi$ in magnetized, rotating QGP
matter at finite temperature and chemical potential using gauge/gravity
duality. By incorporating angular velocity into the holographic magnetic
catalysis model, we analyze the influence of temperature, chemical potential,
magnetic field, and angular velocity on the properties of $J/\Psi$ meson. The
results reveal that temperature, chemical potential, and rotation enhance the
dissociation effect and increase the effective mass in the QGP phase. However,
the magnetic field suppresses dissociation, and its effect on the effective
mass is non-trivial. Additionally, we explore the interplay between magnetic
field and rotation, identifying a critical angular velocity that determines the
dominant effect. As a parallel study, we also examine the rotation effect in
the holographic inverse magnetic catalysis model, although the magnetic field
exhibits distinctly different behaviors in these two models, the impact of
rotation on the dissociation effect of $J/\Psi$ is similar. Finally, we
investigate the influence of electric field and demonstrate that it also speeds
up the $J/\Psi$ dissociation. | http://arxiv.org/abs/2306.04318v1 |
The Quantum Approximate Optimization Algorithm (QAOA) -- one of the leading
algorithms for applications on intermediate-scale quantum processors -- is
designed to provide approximate solutions to combinatorial optimization
problems with shallow quantum circuits. Here, we study QAOA implementations
with cat qubits, using coherent states with opposite amplitudes. The dominant
noise mechanism, i.e., photon losses, results in $Z$-biased noise with this
encoding. We consider in particular an implementation with Kerr resonators. We
numerically simulate solving MaxCut problems using QAOA with cat qubits by
simulating the required gates sequence acting on the Kerr non-linear
resonators, and compare to the case of standard qubits, encoded in ideal
two-level systems, in the presence of single-photon loss. Our results show that
running QAOA with cat qubits increases the approximation ratio for random
instances of MaxCut with respect to qubits encoded into two-level systems. | http://arxiv.org/abs/2305.05556v2 |
We resolve the debate over the existence and magnitude of cross-sublattice
(CS) contributions to spin pumping and spin-transfer torques in a
two-sublattice antiferromagnet connected to a non-magnetic metal. Guided by
symmetry considerations, we first relate the controversial CS terms to specific
components in the spin conductance matrix. Then we quantify these components by
studying the spin-dependent electron scattering on a fully compensated
interface. We ascertain the absence of all CS contributions in the collinear
regime. Even in the non-collinear regime, the CS contributions only constitute
a higher-order correction to the existing theory. | http://arxiv.org/abs/2305.13334v2 |
We construct a Leray-Serre spectral sequence for fibrations for de Rham
cohomology on noncommutative algebras. The fibrations are bimodules with
zero-curvature extendable bimodule connections satisfying an additional
condition. By the KSGNS construction, completely positive maps between
C*-algebras correspond to Hilbert C*-bimodules. We give examples of fibrations
on group algebras and matrix algebras. | http://arxiv.org/abs/2302.00489v1 |
Public knowledge of what is said in parliament is a tenet of democracy, and a
critical resource for political science research. In Australia, following the
British tradition, the written record of what is said in parliament is known as
Hansard. While the Australian Hansard has always been publicly available, it
has been difficult to use for the purpose of large-scale macro- and micro-level
text analysis because it has only been available as PDFs or XMLs. Following the
lead of the Linked Parliamentary Data project which achieved this for Canada,
we provide a new, comprehensive, high-quality, rectangular database that
captures proceedings of the Australian parliamentary debates from 1998 to 2022.
The database is publicly available and can be linked to other datasets such as
election results. The creation and accessibility of this database enables the
exploration of new questions and serves as a valuable resource for both
researchers and policymakers. | http://arxiv.org/abs/2304.04561v2 |
We demonstrate that the spin wave Cherenkov effect can be used to design the
unidirectional spin wave emitter with tunable frequency and switchable
direction of emission. In our numerical studies, we propose to use a pair of
traveling profiles of the magnetic field which generate the spin waves, for
sufficiently large velocity of their motion. In the considered system, the spin
waves of shorter (longer) wavelengths are induced at the front (back) of the
moving profiles and interfere constructively or destructively, depending on the
velocity of the profiles. Moreover, we showed that the spin waves can be
confined between the pair of traveling profiles of the magnetic field. This
work opens the perspectives for the experimental studies in hybrid
magnonic-superconducting systems where the magnetic vortices in a
superconductor can be used as moving sources of the magnetic field driving the
spin waves in the ferromagnetic subsystem. | http://arxiv.org/abs/2307.12653v4 |
Curvature properties of a metric connection with totally skew-symmetric
torsion are investigated. It is shown that if either the 3-form $T$ is
harmonic, $dT=\delta T=0$ or the curvature of the torsion connection $R\in
S^2\Lambda^2$ then the scalar curvature of a $\nabla$-Einstein manifold is
determined by the norm of the torsion up to a constant. It is proved that a
compact generalized gradient Ricci soliton with closed torsion is Ricci flat if
and only if either the norm of the torsion or the Riemannian scalar curvature
are constants. In this case the torsion 3-form is harmonic and the gradient
function has to be constant.
Necessary and sufficient conditions a metric connection with skew torsion to
satisfy the Riemannian first Bianchi identity as well as the contracted
Riemannian second Binachi identity are presented. It is shown that if the
torsion connection satisfies the Riemannian first Bianchi identity then it
satisfies the contracted Riemannian second Bianchi identity. It is also proved
that a metric connection with skew torsion satisfying the curvature identity
$R(X,Y,Z,V)=R(Z,Y,X,V)$ must be flat. | http://arxiv.org/abs/2307.03986v5 |
Simulation-free methods for training continuous-time generative models
construct probability paths that go between noise distributions and individual
data samples. Recent works, such as Flow Matching, derived paths that are
optimal for each data sample. However, these algorithms rely on independent
data and noise samples, and do not exploit underlying structure in the data
distribution for constructing probability paths. We propose Multisample Flow
Matching, a more general framework that uses non-trivial couplings between data
and noise samples while satisfying the correct marginal constraints. At very
small overhead costs, this generalization allows us to (i) reduce gradient
variance during training, (ii) obtain straighter flows for the learned vector
field, which allows us to generate high-quality samples using fewer function
evaluations, and (iii) obtain transport maps with lower cost in high
dimensions, which has applications beyond generative modeling. Importantly, we
do so in a completely simulation-free manner with a simple minimization
objective. We show that our proposed methods improve sample consistency on
downsampled ImageNet data sets, and lead to better low-cost sample generation. | http://arxiv.org/abs/2304.14772v2 |
Public observation logic (POL) reasons about agent expectations and agent
observations in various real world situations. The expectations of agents take
shape based on certain protocols about the world around and they remove those
possible scenarios where their expectations and observations do not match. This
in turn influences the epistemic reasoning of these agents. In this work, we
study the computational complexity of the satisfaction problems of various
fragments of POL. In the process, we also highlight the inevitable link that
these fragments have with the well-studied Public announcement logic. | http://arxiv.org/abs/2306.02769v1 |
In this work, we study and evaluate the impact of a periodic spin-lattice
coupling in an Ising-like system on a 2D triangular lattice. Our proposed
simple Hamiltonian considers this additional interaction as an effect of
preferential phonon propagation direction augmented by the symmetry ofthe
underline lattice. The simplified analytical description of this new model
brought us consistent information about its ground state and thermal behavior,
and allowed us to highlight a singularity where the model behaves as several
decoupled one-dimensional Ising systems. A thorough analysis was obtained via
entropic simulations based in the Wang-Landau method that estimates the density
of states g(E) to explore the phase diagram and other thermodynamic properties
of interest. Also, we used the finite size scaling technique to characterize
the critical exponents and the nature of the phase transitions that, despite
the strong influence of the spin-lattice coupling, turned out to be within the
same universality class as the original 2D Ising model. | http://arxiv.org/abs/2305.03127v2 |
Avian prestin is sensitive to membrane thickness as much as mammalian
prestin, which undergoes conformational transitions in membrane area and
thereby drives length changes of the cylindrical cell body of outer hair cells.
The membrane thickness dependence of mammalian prestin stems from changes in
hydrophobic profile in conformational states, accompanied by changes in their
membrane area. Even though such area changes are not detected for avian
prestin, it nonetheless bends hair bundles of avian short hair cells. Here it
is suggested that the motile function of avian prestin can be based on
conformational transitions involving shearing deformation of the membrane
protein, which also leads to membrane thickness sensitivity. | http://arxiv.org/abs/2307.02440v1 |
There may exist extended configurations in the dark matter sector that are
analogues of structures in the visible sector. In this work, we explore
non-topological solitonic configurations, specifically Q-balls, and study when
they may form macroscopic astrophysical structures and what their distinct
characteristics might be. We study in some detail theoretical bounds on their
sizes and constraints on the underlying parameters, based on criteria for an
astrophysical Q-ball's existence, gravitational stability and viability of
solutions. Following this path, one is able to obtain novel limits on
astrophysical Q-ball sizes and their underlying parameters. We also explore the
gravitational lensing features of different astrophysical Q-ball profiles,
which are more general than the simple thin-wall limit. It is seen that the
magnification characteristics may be very distinct, depending on the actual
details of the solution, even for astrophysical Q-balls having the same size
and mass. Assuming that such astrophysical Q-balls may form a small component
of the dark matter in the universe, we place limits on this fraction from the
gravitational microlensing surveys EROS-2, OGLE-IV, HSC-Subaru and the proposed
future survey WFIRST. Exploring various astrophysical Q-ball profiles and
sizes, it is found that while for most intermediate masses that we consider,
the dark matter fraction comprising astrophysical Q-balls is at most
sub-percent, for other masses it may be significantly higher. | http://arxiv.org/abs/2302.11590v3 |
This demo paper presents UnScientify, an interactive system designed to
detect scientific uncertainty in scholarly full text. The system utilizes a
weakly supervised technique that employs a fine-grained annotation scheme to
identify verbally formulated uncertainty at the sentence level in scientific
texts. The pipeline for the system includes a combination of pattern matching,
complex sentence checking, and authorial reference checking. Our approach
automates labeling and annotation tasks for scientific uncertainty
identification, taking into account different types of scientific uncertainty,
that can serve various applications such as information retrieval, text mining,
and scholarly document processing. Additionally, UnScientify provides
interpretable results, aiding in the comprehension of identified instances of
scientific uncertainty in text. | http://arxiv.org/abs/2307.14236v1 |
Blind Estimation of Audio Effects (BE-AFX) aims at estimating the Audio
Effects (AFXs) applied to an original, unprocessed audio sample solely based on
the processed audio sample. To train such a system traditional approaches
optimize a loss between ground truth and estimated AFX parameters. This
involves knowing the exact implementation of the AFXs used for the process. In
this work, we propose an alternative solution that eliminates the requirement
for knowing this implementation. Instead, we introduce an auto-encoder
approach, which optimizes an audio quality metric. We explore, suggest, and
compare various implementations of commonly used mastering AFXs, using
differential signal processing or neural approximations. Our findings
demonstrate that our auto-encoder approach yields superior estimates of the
audio quality produced by a chain of AFXs, compared to the traditional
parameter-based approach, even if the latter provides a more accurate parameter
estimation. | http://arxiv.org/abs/2310.11781v2 |
The aim of this paper is to investigate the effect of a novel method called
linear law-based feature space transformation (LLT) on the accuracy of intraday
price movement prediction of cryptocurrencies. To do this, the 1-minute
interval price data of Bitcoin, Ethereum, Binance Coin, and Ripple between 1
January 2019 and 22 October 2022 were collected from the Binance cryptocurrency
exchange. Then, 14-hour nonoverlapping time windows were applied to sample the
price data. The classification was based on the first 12 hours, and the two
classes were determined based on whether the closing price rose or fell after
the next 2 hours. These price data were first transformed with the LLT, then
they were classified by traditional machine learning algorithms with 10-fold
cross-validation. Based on the results, LLT greatly increased the accuracy for
all cryptocurrencies, which emphasizes the potential of the LLT algorithm in
predicting price movements. | http://arxiv.org/abs/2305.04884v1 |
For an inverse coefficient problem of determining a state-varying factor in
the corresponding Hamiltonian for a mean field game system, we prove the global
Lipschitz stability by spatial data of one component and interior data in an
arbitrarily chosen subdomain over a time interval. The proof is based on
Carleman estimates with different norms. | http://arxiv.org/abs/2307.04025v1 |
The TREC Fair Ranking Track aims to provide a platform for participants to
develop and evaluate novel retrieval algorithms that can provide a fair
exposure to a mixture of demographics or attributes, such as ethnicity, that
are represented by relevant documents in response to a search query. For
example, particular demographics or attributes can be represented by the
documents' topical content or authors. The 2021 Fair Ranking Track adopted a
resource allocation task. The task focused on supporting Wikipedia editors who
are looking to improve the encyclopedia's coverage of topics under the purview
of a WikiProject. WikiProject coordinators and/or Wikipedia editors search for
Wikipedia documents that are in need of editing to improve the quality of the
article. The 2021 Fair Ranking track aimed to ensure that documents that are
about, or somehow represent, certain protected characteristics receive a fair
exposure to the Wikipedia editors, so that the documents have an fair
opportunity of being improved and, therefore, be well-represented in Wikipedia.
The under-representation of particular protected characteristics in Wikipedia
can result in systematic biases that can have a negative human, social, and
economic impact, particularly for disadvantaged or protected societal groups. | http://arxiv.org/abs/2302.10856v1 |
Bhatnagar-Gross-Krook (BGK) equation is a relaxation model of the Boltzmann
equation which is widely used in place of the Boltzmann equation for the
simulation of various kinetic flow problems. In this work, we study the
asymptotic stability of the BGK model when the initial data is not necessarily
close to the global equilibrium pointwisely. Due to the highly nonlinear
structure of the relaxation operator, the argument developed to derive the
bootstrap estimate for the Boltzmann equation leads to a weaker estimate in the
case of the BGK model, which does not exclude the possible blow-up of the
perturbation. To overcome this issue, we carry out a refined analysis of the
macroscopic fields to guarantee that the system transits from a highly
nonlinear regime into a quadratic nonlinear regime after a long but finite
time, in which the highly nonlinear perturbative term relaxes to essentially
quadratic nonlinearity. | http://arxiv.org/abs/2301.09857v2 |
Let $(S, \n)$ be a commutative noetherian local ring and $\omega\in\n$ be
non-zerodivisor. This paper deals with the behavior of the category
$\mon(\omega, \cp)$ consisting of all monomorphisms between finitely generated
projective $S$-modules with cokernels annihilated by $\omega$. We introduce a
homotopy category $\HT\mon(\omega, \cp)$, which is shown to be triangulated. It
is proved that this homotopy category embeds into the singularity category of
the factor ring $R=S/{(\omega)}$. As an application, not only the existence of
almost split sequences {ending at indecomposable non-projective objects of}
$\mon(\omega, \cp)$ is proven, but also the Auslander-Reiten translation,
$\tau_{\mon}(-)$, is completely recognized. Particularly, it will be observed
that any non-projective object of $\mon(\omega, \cp)$ with local endomorphism
ring is invariant under the square of the Auslander-Reiten translation. | http://arxiv.org/abs/2307.13559v1 |
We explore the task of embodied view synthesis from monocular videos of
deformable scenes. Given a minute-long RGBD video of people interacting with
their pets, we render the scene from novel camera trajectories derived from the
in-scene motion of actors: (1) egocentric cameras that simulate the point of
view of a target actor and (2) 3rd-person cameras that follow the actor.
Building such a system requires reconstructing the root-body and articulated
motion of every actor, as well as a scene representation that supports
free-viewpoint synthesis. Longer videos are more likely to capture the scene
from diverse viewpoints (which helps reconstruction) but are also more likely
to contain larger motions (which complicates reconstruction). To address these
challenges, we present Total-Recon, the first method to photorealistically
reconstruct deformable scenes from long monocular RGBD videos. Crucially, to
scale to long videos, our method hierarchically decomposes the scene into the
background and objects, whose motion is decomposed into carefully initialized
root-body motion and local articulations. To quantify such "in-the-wild"
reconstruction and view synthesis, we collect ground-truth data from a
specialized stereo RGBD capture rig for 11 challenging videos, significantly
outperforming prior methods. Our code, model, and data can be found at
https://andrewsonga.github.io/totalrecon . | http://arxiv.org/abs/2304.12317v2 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.