| text string | source string |
|---|---|
Transfer learning is a valuable tool in deep learning as it allows
propagating information from one "source dataset" to another "target dataset",
especially in the case of a small number of training examples in the latter.
Yet, discrepancies between the underlying distributions of the source and
target data are commonplace and are known to have a substantial impact on
algorithm performance. In this work we suggest novel information-theoretic
approaches for the analysis of the performance of deep neural networks in the
context of transfer learning. We focus on the task of semi-supervised transfer
learning, in which unlabeled samples from the target dataset are available
during network training on the source dataset. Our theory suggests that one may
improve the transferability of a deep neural network by incorporating
regularization terms on the target data based on information-theoretic
quantities, namely the Mutual Information and the Lautum Information. We
demonstrate the effectiveness of the proposed approaches in various
semi-supervised transfer learning experiments. | http://arxiv.org/abs/2306.06731v1 |
Learning deep discrete latent representations offers the promise of better
symbolic and summarized abstractions that are more useful to subsequent
downstream tasks. Inspired by the seminal Vector Quantized Variational
Auto-Encoder (VQ-VAE), most work on learning deep discrete representations
has focused on improving the original VQ-VAE form, and none has studied
learning deep discrete representations from the generative viewpoint.
In this work, we take this generative viewpoint. Specifically, we posit
discrete distributions over sequences of codewords and learn a deterministic
decoder that transports the distribution over sequences of codewords to the
data distribution by minimizing a Wasserstein (WS) distance between them. We
develop further theory to connect
it with the clustering viewpoint of WS distance, allowing us to have a better
and more controllable clustering solution. Finally, we empirically evaluate our
method on several well-known benchmarks, where it achieves better qualitative
and quantitative performances than the other VQ-VAE variants in terms of the
codebook utilization and image reconstruction/generation. | http://arxiv.org/abs/2302.05917v2 |
We introduce PaLM 2, a new state-of-the-art language model that has better
multilingual and reasoning capabilities and is more compute-efficient than its
predecessor PaLM. PaLM 2 is a Transformer-based model trained using a mixture
of objectives. Through extensive evaluations on English and multilingual
language, and reasoning tasks, we demonstrate that PaLM 2 has significantly
improved quality on downstream tasks across different model sizes, while
simultaneously exhibiting faster and more efficient inference compared to PaLM.
This improved efficiency enables broader deployment while also allowing the
model to respond faster, for a more natural pace of interaction. PaLM 2
demonstrates robust reasoning capabilities exemplified by large improvements
over PaLM on BIG-Bench and other reasoning tasks. PaLM 2 exhibits stable
performance on a suite of responsible AI evaluations, and enables
inference-time control over toxicity without additional overhead or impact on
other capabilities. Overall, PaLM 2 achieves state-of-the-art performance
across a diverse set of tasks and capabilities.
When discussing the PaLM 2 family, it is important to distinguish between
pre-trained models (of various sizes), fine-tuned variants of these models, and
the user-facing products that use these models. In particular, user-facing
products typically include additional pre- and post-processing steps.
Additionally, the underlying models may evolve over time. Therefore, one should
not expect the performance of user-facing products to exactly match the results
reported in this report. | http://arxiv.org/abs/2305.10403v3 |
Object detection has seen remarkable progress in recent years with the
introduction of Convolutional Neural Networks (CNNs). Object detection is a
multi-task learning problem in which both the positions of objects in an image
and their classes need to be correctly identified. The idea here
is to maximize the overlap between the ground-truth bounding boxes and the
predictions, i.e., the Intersection over Union (IoU). In current work in this
domain, the IoU is approximated using the Huber loss as a proxy, but this
indirect method does not leverage the IoU information and treats the bounding
box as four independent, unrelated regression terms. This is
not true for a bounding box where the four coordinates are highly correlated
and hold a semantic meaning when taken together. The direct optimization of the
IoU is not possible due to its non-convex and non-differentiable nature. In
this paper, we formulate a novel loss, the Smooth IoU, which
directly optimizes the IoU for bounding boxes. This loss has been
evaluated on the Oxford IIIT Pets, Udacity self-driving car, PASCAL VOC, and
VWFS Car Damage datasets and has shown performance gains over the standard
Huber loss. | http://arxiv.org/abs/2304.07256v1 |
This study delves into the temporal dynamics within the equity market through
the lens of bond traders. Recognizing that the riskless interest rate
fluctuates over time, we leverage the Black-Derman-Toy model to trace its
temporal evolution. To gain insights from a bond trader's perspective, we focus
on a specific type of bond: the zero-coupon bond. This paper introduces a
pricing algorithm for this bond and presents a formula that can be used to
ascertain its real value. By crafting an equation that juxtaposes the
theoretical value of a zero-coupon bond with its actual value, we can deduce
the risk-neutral probability. It is noteworthy that the risk-neutral
probability correlates with variables like the instantaneous mean return,
instantaneous volatility, and inherent upturn probability in the equity market.
Examining these relationships enables us to discern the temporal shifts in
these parameters. Our findings suggest that the mean starts at a negative
value, eventually plateauing at a consistent level. The volatility, on the
other hand, initially has a minimal positive value, peaks swiftly, and then
stabilizes. Lastly, the upturn probability is initially significantly high,
plunges rapidly, and ultimately reaches equilibrium. | http://arxiv.org/abs/2306.16522v2 |
In this work, we present an end-to-end Knowledge Graph Question Answering
(KGQA) system named GETT-QA. GETT-QA uses T5, a popular text-to-text
pre-trained language model. The model takes a question in natural language as
input and produces a simpler form of the intended SPARQL query. In the simpler
form, the model does not directly produce entity and relation IDs. Instead, it
produces corresponding entity and relation labels. The labels are grounded to
KG entity and relation IDs in a subsequent step. To further improve the
results, we instruct the model to produce a truncated version of the KG
embedding for each entity. The truncated KG embedding enables a finer search
for disambiguation purposes. We find that T5 is able to learn the truncated KG
embeddings without any change of loss function, improving KGQA performance. As
a result, we report strong results for LC-QuAD 2.0 and SimpleQuestions-Wikidata
datasets on end-to-end KGQA over Wikidata. | http://arxiv.org/abs/2303.13284v3 |
This paper develops a novel minimal-state operational semantics for
higher-order functional languages that uses only the call stack and a source
program point or a lexical level as the complete state information: there is no
environment, no substitution, no continuation, etc. We prove this form of
operational semantics equivalent to standard presentations.
We then show how this approach can open the door to potential new
applications: we define a program analysis as a direct finitization of this
operational semantics. The program analysis that naturally emerges has a number
of novel and interesting properties compared to standard program analyses for
higher-order programs: for example, it can infer recurrences and does not need
value widening. We both give a formal definition of the analysis and describe
our current implementation. | http://arxiv.org/abs/2310.15915v2 |
Encoding long sequences in Natural Language Processing (NLP) is a challenging
problem. Though recent pretrained language models achieve satisfactory
performance on many NLP tasks, they are still restricted by a pre-defined
maximum length, making them difficult to extend to longer sequences. Some
recent works therefore utilize hierarchies to model long sequences. However,
most of them apply sequential models to the upper hierarchies and thus suffer
from long-dependency issues. In this paper, we alleviate these issues through a
graph-based method. We first chunk the sequence with a fixed length to model
the sentence-level information. We then leverage graphs to model intra- and
cross-sentence correlations with a new attention mechanism. Additionally, due
to limited standard benchmarks for long document classification (LDC), we
propose a new challenging benchmark, totaling six datasets with up to 53k
samples and an average length of 4034 tokens. Evaluation shows our model surpasses
competitive baselines by 2.6% in F1 score, and 4.8% on the longest sequence
dataset. Our method is shown to outperform hierarchical sequential models with
better performance and scalability, especially for longer sequences. | http://arxiv.org/abs/2305.03319v2 |
Ergonomic efficiency is essential to the mass and prolonged adoption of VR/AR
experiences. While VR/AR head-mounted displays unlock users' natural wide-range
head movements during viewing, their neck muscle comfort is inevitably
compromised by the added hardware weight. Unfortunately, little quantitative
knowledge for understanding and addressing such an issue is available so far.
Leveraging electromyography devices, we measure, model, and predict VR users'
neck muscle contraction levels (MCL) while they move their heads to interact
with the virtual environment. Specifically, by learning from collected
physiological data, we develop a bio-physically inspired computational model to
predict neck MCL under diverse head kinematic states. Beyond quantifying the
cumulative MCL of completed head movements, our model can also predict
potential MCL requirements with target head poses only. A series of objective
evaluations and user studies demonstrate its prediction accuracy and
generality, as well as its ability to reduce users' neck discomfort by
optimizing the layout of visual targets. We hope this research will motivate
new ergonomic-centered designs for VR/AR and interactive graphics applications.
Source code is released at:
https://github.com/NYU-ICL/xr-ergonomics-neck-comfort. | http://arxiv.org/abs/2308.14841v1 |
Identifying the infection status of each individual during an infectious
disease outbreak informs public health management. However, performing frequent
individual-level tests may not be feasible. Instead, sparse and sometimes
group-level tests are performed. Determining the infection status of
individuals using sparse group-level tests remains an open problem. We have
tackled this problem by extending graph-coupled hidden Markov models with
individuals' infection statuses as the hidden states and the group test results
as the observations. We fitted the model to simulation datasets using the Gibbs
sampling method. The model achieved an AUC of about 0.55 for low testing
frequencies, which increased to 0.80 when the groups were tested every day.
The model was separately tested in the daily-testing case to predict the
statuses over time after the first 15 days of the spread, which resulted in
0.98 AUC at day 16 and remained above 0.80 AUC until day 128. Therefore,
although dealing with sparse tests remains unsolved, the results open the
possibility of using initial group screenings during pandemics to accurately
estimate individuals' infection statuses. | http://arxiv.org/abs/2306.02557v1 |
We study stochastic Cubic Newton methods for solving general possibly
non-convex minimization problems. We propose a new framework, which we call the
helper framework, that provides a unified view of the stochastic and
variance-reduced second-order algorithms equipped with global complexity
guarantees. It can also be applied to learning with auxiliary information. Our
helper framework offers the algorithm designer high flexibility for
constructing and analyzing the stochastic Cubic Newton methods, allowing
arbitrary size batches, and the use of noisy and possibly biased estimates of
the gradients and Hessians, incorporating both the variance reduction and the
lazy Hessian updates. We recover the best-known complexities for the stochastic
and variance-reduced Cubic Newton, under weak assumptions on the noise. A
direct consequence of our theory is the new lazy stochastic second-order
method, which significantly improves the arithmetic complexity for large
dimension problems. We also establish complexity bounds for the classes of
gradient-dominated objectives, which include convex and strongly convex
problems. For Auxiliary Learning, we show that using a helper (auxiliary
function) can outperform training alone if a given similarity measure is small. | http://arxiv.org/abs/2302.11962v4 |
We study the density fluctuations at equilibrium of the multi-species
stirring process, a natural multi-type generalization of the symmetric
(partial) exclusion process. In the diffusive scaling limit, the resulting
process is a system of infinite-dimensional Ornstein-Uhlenbeck processes that
are coupled in the noise terms. This shows that at the level of equilibrium
fluctuations the species start to interact, even though at the level of the
hydrodynamic limit each species diffuses separately. We consider also a
generalization to a multi-species stirring process with a linear reaction term
arising from species mutation. The general techniques used in the proof are
based on the Dynkin martingale approach, combined with duality for the
computation of the covariances. | http://arxiv.org/abs/2307.05111v2 |
The speech-to-singing (STS) voice conversion task aims to generate singing
samples corresponding to speech recordings while facing a major challenge: the
alignment between the target (singing) pitch contour and the source (speech)
content is difficult to learn in a text-free situation. This paper proposes
AlignSTS, an STS model based on explicit cross-modal alignment, which views
speech variance such as pitch and content as different modalities. Inspired by
the mechanism of how humans sing lyrics to a melody, AlignSTS: 1)
adopts a novel rhythm adaptor to predict the target rhythm representation to
bridge the modality gap between content and pitch, where the rhythm
representation is computed in a simple yet effective way and is quantized into
a discrete space; and 2) uses the predicted rhythm representation to re-align
the content based on cross-attention and conducts a cross-modal fusion for
re-synthesis. Extensive experiments show that AlignSTS achieves superior
performance in terms of both objective and subjective metrics. Audio samples
are available at https://alignsts.github.io. | http://arxiv.org/abs/2305.04476v4 |
We report the first near-infrared detection of Uranus's tiny moon Mab, the
presumed source of the blue and diffuse $\mu$ ring, using the NIRC2 instrument
at Keck Observatory. The detection was permitted by an updated shift-and-stack
procedure allowing us to integrate on Mab as it moved across the detector in 23
separate exposures taken over $\sim$2 hours, as well as the very low
(0.02$^{\circ}$) phase angle at the time of observation. At this phase angle,
Mab has an integrated I/F of 24 $\pm$ 3 km$^2$ at 1.6 $\mu$m and $\lesssim$37
km$^2$ at 2.1 $\mu$m. Comparing these values with Mab's visible reflectance as
derived by HST reveals that Mab is spectrally blue; its (0.5 $\mu$m)/(1.6
$\mu$m) color is more consistent with Miranda's value than Puck's value. Mab is
therefore more likely a $\sim$6-km radius body with a Miranda-like surface than
a 12-km radius body with a Puck-like surface, in agreement with prior work
based on infrared upper limits, but we caution that a Puck-like color is only
ruled out at the 2$\sigma$ level. We also report the first infrared photometry
of Perdita, finding an integrated I/F of 31 $\pm$ 3 km$^2$ at 1.6 $\mu$m. | http://arxiv.org/abs/2307.13773v1 |
The security context used in 5G authentication is generated during the
Authentication and Key Agreement (AKA) procedure and stored in both the user
equipment (UE) and the network sides for the subsequent fast registration
procedure. Given its importance, it is imperative to formally analyze the
security mechanism of the security context. The security context in the UE can
be stored in the Universal Subscriber Identity Module (USIM) card or in the
baseband chip. In this work, we present a comprehensive and formal verification
of the fast registration procedure based on the security context under the two
scenarios in ProVerif. Our analysis identifies two vulnerabilities, including
one that has not been reported before. Specifically, the security context
stored in the USIM card can be read illegally, and the validity checking
mechanism of the security context in the baseband chip can be bypassed.
Moreover, these vulnerabilities also apply to 4G networks. As a consequence, an
attacker can exploit these vulnerabilities to register to the network with the
victim's identity and then launch other attacks, including one-tap
authentication bypass leading to privacy disclosure, location spoofing, etc. To
ensure that these attacks are indeed realizable in practice, we have
responsibly confirmed them through experiments with three operators. Our
analysis reveals that these vulnerabilities stem from design flaws of the
standard and unsafe practices by operators. We finally propose several
potential countermeasures to prevent these attacks. We have reported our
findings to the GSMA and received a coordinated vulnerability disclosure (CVD)
number CVD-2022-0057. | http://arxiv.org/abs/2303.10955v1 |
The interplay among differential geometry, statistical physics, and quantum
information science has been increasingly gaining theoretical interest in
recent years. In this paper, we present an explicit analysis of the Bures and
Sjoqvist metrics over the manifolds of thermal states for specific spin qubit
and the superconducting flux qubit Hamiltonian models. While the two metrics
equally reduce to the Fubini-Study metric in the asymptotic limiting case of
the inverse temperature approaching infinity for both Hamiltonian models, we
observe that the two metrics are generally different when departing from the
zero-temperature limit. In particular, we discuss this discrepancy in the case
of the superconducting flux Hamiltonian model. We conclude that the two metrics
differ in the presence of a nonclassical behavior specified by the
noncommutativity of neighboring mixed quantum states. Such a noncommutativity,
in turn, is quantified by the two metrics in different manners. Finally, we
briefly discuss possible observable consequences of this discrepancy between
the two metrics when using them to predict critical and/or complex behavior of
physical systems of interest in quantum information science. | http://arxiv.org/abs/2303.01680v1 |
Modern face recognition (FR) models excel in constrained scenarios, but often
suffer from decreased performance when deployed in unconstrained (real-world)
environments due to uncertainties surrounding the quality of the captured
facial data. Face image quality assessment (FIQA) techniques aim to mitigate
these performance degradations by providing FR models with sample-quality
predictions that can be used to reject low-quality samples and reduce false
match errors. However, despite steady improvements, ensuring reliable quality
estimates across facial images with diverse characteristics remains
challenging. In this paper, we present a powerful new FIQA approach, named
DifFIQA, which relies on denoising diffusion probabilistic models (DDPM) and
ensures highly competitive results. The main idea behind the approach is to
utilize the forward and backward processes of DDPMs to perturb facial images
and quantify the impact of these perturbations on the corresponding image
embeddings for quality prediction. Because the diffusion-based perturbations
are computationally expensive, we also distill the knowledge encoded in DifFIQA
into a regression-based quality predictor, called DifFIQA(R), that balances
performance and execution time. We evaluate both models in comprehensive
experiments on 7 datasets, with 4 target FR models and against 10
state-of-the-art FIQA techniques with highly encouraging results. The source
code will be made publicly available. | http://arxiv.org/abs/2305.05768v1 |
The non-linear autoregressive (NLAR) model plays an important role in
modeling and predicting time series. One-step ahead prediction is
straightforward using the NLAR model, but the multi-step ahead prediction is
cumbersome. For instance, iterating the one-step ahead predictor is a
convenient strategy for linear autoregressive (LAR) models, but it is
suboptimal under NLAR. In this paper, we first propose a simulation and/or
bootstrap algorithm to construct optimal point predictors under an $L_1$ or
$L_2$ loss criterion. In addition, we construct bootstrap prediction intervals
in the multi-step ahead prediction problem; in particular, we develop an
asymptotically valid quantile prediction interval as well as a pertinent
prediction interval for future values. In order to correct the undercoverage of
prediction intervals with finite samples, we further employ predictive -- as
opposed to fitted -- residuals in the bootstrap process. Simulation studies are
also given to substantiate the finite sample performance of our methods. | http://arxiv.org/abs/2306.04126v1 |
In this paper we obtain several properties of translating solitons for a
general class of extrinsic geometric curvature flows given by a homogeneous,
symmetric, smooth non-negative function $\gamma$ defined in an open cone
$\Gamma\subset\mathbb{R}^n$. The main results are tangential principles,
nonexistence theorems for closed and entire solutions, and a uniqueness result
that says that any strictly convex $\gamma$-translator defined on a ball with a
single end $\mathcal{C}^2$-asymptotic to a cylinder is the "bowl"-type
solution found in the translator paper of S. Rengaswami. | http://arxiv.org/abs/2306.03649v1 |
The Fibonacci numbers are the prototypical example of a recursive sequence,
but grow too quickly to enumerate sets of integer partitions. The same is true
for the other classical sequences $a(n)$ defined by Fibonacci-like recursions:
the tribonacci, Padovan, Pell, Narayana's cows, and Lucas sequences. For each
sequence $a(n)$, however, we can define a related sequence $\textrm{sa}(n)$ by
defining $\textrm{sa}(n)$ to have the same recurrence and initial conditions as
$a(n)$, except that $\textrm{sa}(2n)=\textrm{sa}(n)$. Growth is no longer a
problem: for each $n$ we construct recursively a set $\mathcal{SA}(n)$ of
partitions of $n$ such that the cardinality of $\mathcal{SA}(n)$ is
$\textrm{sa}(n)$. We study the properties of partitions in $\mathcal{SA}(n)$
and in each case we give non-recursive descriptions. We find congruences for
$\textrm{sa}(n)$ and also for $\textrm{psa}(n)$, the total number of parts in
all partitions in $\mathcal{SA}(n)$. | http://arxiv.org/abs/2303.11493v1 |
We propose StitchNet, a novel neural network creation paradigm that stitches
together fragments (one or more consecutive network layers) from multiple
pre-trained neural networks. StitchNet allows the creation of high-performing
neural networks without the large compute and data requirements needed under
traditional model creation processes via backpropagation training. We leverage
Centered Kernel Alignment (CKA) as a compatibility measure to efficiently guide
the selection of these fragments in composing a network for a given task
tailored to specific accuracy needs and computing resource constraints. We then
show that these fragments can be stitched together to create neural networks
with accuracy comparable to that of traditionally trained networks at a
fraction of computing resource and data requirements. Finally, we explore a
novel on-the-fly personalized model creation and inference application enabled
by this new paradigm. The code is available at
https://github.com/steerapi/stitchnet. | http://arxiv.org/abs/2301.01947v3 |
Despite decades of effort to resolve them, memory safety violations remain
persistent and problematic in modern systems. Various defense mechanisms have
been proposed, but their deployment in real systems remains challenging because
of performance, security, or compatibility concerns. In this paper, we propose
RV-CURE, a RISC-V capability architecture that implements full-system support
for full memory safety. For capability enforcement, we first propose a compiler
technique, data-pointer tagging (DPT), applicable to protecting all memory
types. It inserts a pointer tag in a pointer address and associates that tag
with the pointer's capability metadata. DPT enforces a capability check for
every memory access by a tagged pointer and thereby prevents illegitimate
memory accesses. Furthermore, we investigate and present lightweight hardware
extensions for DPT based on the open-source RISC-V BOOM processor. We observe
that a capability-execution pipeline can be implemented in parallel with the
existing memory-execution pipeline without intrusive modifications. With our
seamless hardware integration, we achieve low-cost capability checks
transparently performed in hardware. Altogether, we prototype RV-CURE as a
synthesized RTL processor and conduct full-system evaluations on FPGAs running
Linux OS. Our evaluations show that RV-CURE achieves strong memory safety at a
10.8% slowdown across the SPEC 2017 C/C++ workloads. | http://arxiv.org/abs/2308.02945v1 |
In modern-day industry, clustering algorithms are part of the daily routine
of algorithm engineers. Although clustering algorithms experienced rapid
growth before 2010, innovation on this research topic has stagnated since deep
learning became the de facto industrial standard for machine learning
applications. In 2007, a density-based clustering algorithm named DENCLUE was
invented to solve the clustering problem for nonlinear data structures.
However, its parameter
selection problem was largely neglected until 2011. In this paper, we propose a
new approach to compute the optimal parameters for the DENCLUE algorithm, and
discuss its performance in the experiment section. | http://arxiv.org/abs/2307.03206v2 |
We present the first analysis of NGC2071-North as a resolved hub-filament
system featuring a double centre. This $\sim 1.5 \times 1.5$ parsec-scale filament hub
contains $\sim$500 $M_\odot$. Seen from Planck, magnetic field lines may have
facilitated the gathering of material at this isolated location. The energy
balance analysis, supported by infalling gas signatures, reveal that these
filaments are currently forming stars. Herschel 100 $\mu$m emission
concentrates in the hub, at IRAS 05451+0037 and LkH$\alpha$ 316, and presents
diffuse lobes and loops around them. We suggest that such a double centre
could be formed because the converging locations of the filament pairs are
offset by 2.3$'$ (0.27 pc). This distance also matches the diameter of a hub-ring, seen
in column density and molecular tracers, such as HCO$^+$(1$-$0) and HCN(1$-$0),
that may indicate a transition and the connection between the hub and the
radiating filaments. We argue that all three components of the emission
star LkH$\alpha$ 316 are in physical association. We find that a $\sim$0.06
pc-sized gas loop, attached to IRAS 05451+0037, can be seen at wavelengths all
the way from Pan-STARRS-i to Herschel-100 $\mu$m. These observations suggest
that both protostars at the double hub centre are interacting with the cloud
material. In our $^{13}$CO data, we do not seem to find the outflow of this
region that was identified in the 1980s with much lower resolution. | http://arxiv.org/abs/2301.00481v1 |
Although deep learning has achieved remarkable success in various scientific
machine learning applications, its opaque nature poses concerns regarding
interpretability and generalization capabilities beyond the training data.
Interpretability is crucial and often desired in modeling physical systems.
Moreover, acquiring extensive datasets that encompass the entire range of input
features is challenging in many physics-based learning tasks, leading to
increased errors when encountering out-of-distribution (OOD) data. In this
work, motivated by the field of functional data analysis (FDA), we propose
generalized functional linear models as an interpretable surrogate for a
trained deep learning model. We demonstrate that our model could be trained
either based on a trained neural network (post-hoc interpretation) or directly
from training data (interpretable operator learning). A library of generalized
functional linear models with different kernel functions is considered and
sparse regression is used to discover an interpretable surrogate model that
could be analytically presented. We present test cases in solid mechanics,
fluid mechanics, and transport. Our results demonstrate that our model can
achieve comparable accuracy to deep learning and can improve OOD generalization
while providing more transparency and interpretability. Our study underscores
the significance of interpretable representation in scientific machine learning
and showcases the potential of functional linear models as a tool for
interpreting and generalizing deep learning. | http://arxiv.org/abs/2307.04569v2 |
Optical pulses propagating in multimode optical fibers are affected by linear
disorder and nonlinearity, and experience chaotic exchange of power among
modes. On the other hand, complex systems can attain steady states
characterized by energy condensation into single as well as multiple sub-systems.
In this work, we study beam propagation in multimode optical fibers in the
presence of linear random mode coupling and Kerr nonlinearity; both effects
lead to a mode power redistribution at the fiber output. We use a new 3D mode
decomposition method to obtain, with unprecedented accuracy, measurements of
the modal distribution from long spans of graded-index fiber; we perform
numerical simulations using a new model for the linear disorder; we introduce a
weighted Bose-Einstein law and show that it is suitable for describing
steady-state modal power distributions both in the linear and nonlinear
regimes. We show that, at power levels intermediate between the linear and the
soliton regimes, energy condensation is attained locally by the second, third
and fourth modal groups, before global condensation to the fundamental mode is
reached in the soliton regime. Our results extend the thermodynamic approach to
multimode fibers to unexplored optical states, which acquire the
characteristics of optical glass. | http://arxiv.org/abs/2306.15995v1 |
Counterfactual fairness requires that a person would have been classified in
the same way by an AI or other algorithmic system if they had a different
protected class, such as a different race or gender. This is an intuitive
standard, as reflected in the U.S. legal system, but its use is limited because
counterfactuals cannot be directly observed in real-world data. On the other
hand, group fairness metrics (e.g., demographic parity or equalized odds) are
less intuitive but more readily observed. In this paper, we use $\textit{causal
context}$ to bridge the gaps between counterfactual fairness, robust
prediction, and group fairness. First, we motivate counterfactual fairness by
showing that there is not necessarily a fundamental trade-off between fairness
and accuracy because, under plausible conditions, the counterfactually fair
predictor is in fact accuracy-optimal in an unbiased target distribution.
Second, we develop a correspondence between the causal graph of the
data-generating process and which, if any, group fairness metrics are
equivalent to counterfactual fairness. Third, we show that in three common
fairness contexts$\unicode{x2013}$measurement error, selection on label, and
selection on predictors$\unicode{x2013}$counterfactual fairness is equivalent
to demographic parity, equalized odds, and calibration, respectively.
Counterfactual fairness can sometimes be tested by measuring relatively simple
group fairness metrics. | http://arxiv.org/abs/2310.19691v1 |
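The group fairness metrics mentioned above have simple empirical estimators. A minimal sketch for binary predictions and a binary protected attribute (variable names are illustrative):

```python
def demographic_parity_gap(y_pred, group):
    """|P(Yhat=1 | A=0) - P(Yhat=1 | A=1)| for binary 0/1 predictions."""
    def rate(a):
        sel = [p for p, g in zip(y_pred, group) if g == a]
        return sum(sel) / len(sel)
    return abs(rate(0) - rate(1))

def equalized_odds_gap(y_pred, y_true, group):
    """Largest gap in true-positive and false-positive rates between groups."""
    def rate(a, y):
        # positive-prediction rate within group a, restricted to label y
        sel = [p for p, t, g in zip(y_pred, y_true, group) if g == a and t == y]
        return sum(sel) / len(sel)
    tpr_gap = abs(rate(0, 1) - rate(1, 1))
    fpr_gap = abs(rate(0, 0) - rate(1, 0))
    return max(tpr_gap, fpr_gap)
```

A predictor that is a fixed function of the true label has zero equalized-odds gap even when group base rates differ, which is exactly the situation where demographic parity and equalized odds diverge.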
Large Language Models (LLMs) have achieved remarkable results. However,
existing models are expensive to train and deploy, and it is also difficult to
expand their knowledge beyond pre-training data without forgetting previous
knowledge. This paper proposes a new neural network architecture, ModuleFormer,
that leverages modularity to improve the efficiency and flexibility of large
language models. ModuleFormer is based on the Sparse Mixture of Experts (SMoE).
Unlike previous SMoE-based modular language models, which require
domain-labeled data to learn domain-specific experts, ModuleFormer can induce
modularity from uncurated data with its new load balancing and concentration
losses. ModuleFormer is a modular architecture that includes two different
types of modules: new stick-breaking attention heads and feedforward experts.
Different modules are sparsely activated conditioned on the input token during
training and inference. In our experiment, we found that the modular
architecture enables three important abilities for large pre-trained language
models: 1) Efficiency, since ModuleFormer only activates a subset of its
modules for each input token, thus it could achieve the same performance as
dense LLMs with more than two times throughput; 2) Extendability, ModuleFormer
is more immune to catastrophic forgetting than dense LLMs and can be easily
extended with new modules to learn new knowledge that is not included in the
training data; 3) Specialisation, finetuning ModuleFormer can specialize a
subset of modules to the finetuning task, and the task-unrelated modules can
be easily pruned for a lightweight deployment. | http://arxiv.org/abs/2306.04640v2 |
Supervised neural approaches are hindered by their dependence on large,
meticulously annotated datasets, a requirement that is particularly cumbersome
for sequential tasks. The quality of annotations tends to deteriorate with the
transition from expert-based to crowd-sourced labelling. To address these
challenges, we present \textbf{CAMELL} (Confidence-based Acquisition Model for
Efficient self-supervised active Learning with Label validation), a pool-based
active learning framework tailored for sequential multi-output problems. CAMELL
possesses three core features: (1) it requires expert annotators to label only
a fraction of a chosen sequence, (2) it facilitates self-supervision for the
remainder of the sequence, and (3) it employs a label validation mechanism to
prevent erroneous labels from contaminating the dataset and harming model
performance. We evaluate CAMELL on sequential tasks, with a special emphasis on
dialogue belief tracking, a task plagued by the constraints of limited and
noisy datasets. Our experiments demonstrate that CAMELL outperforms the
baselines in terms of efficiency. Furthermore, the data corrections suggested
by our method contribute to an overall improvement in the quality of the
resulting datasets. | http://arxiv.org/abs/2310.08944v1 |
Compositionality is a pivotal property of symbolic reasoning. However, how
well recent neural models capture compositionality remains underexplored in
symbolic reasoning tasks. This study empirically addresses this question by
systematically examining recently published pre-trained seq2seq models with a
carefully controlled dataset of multi-hop arithmetic symbolic reasoning. We
introduce a skill tree on compositionality in arithmetic symbolic reasoning
that defines the hierarchical levels of complexity along with three
compositionality dimensions: systematicity, productivity, and substitutivity.
Our experiments revealed that among the three types of composition, the models
struggled most with systematicity, performing poorly even with relatively
simple compositions. That difficulty was not resolved even after training the
models with intermediate reasoning steps. | http://arxiv.org/abs/2302.07866v1 |
The non-Hermitian skin effect is an iconic phenomenon characterized by the
aggregation of eigenstates near the system boundaries in non-Hermitian systems.
While extensively studied in one dimension, understanding the skin effect and
extending the non-Bloch band theory to higher dimensions encounters a
formidable challenge, primarily due to infinite lattice geometries or open
boundary conditions. This work adopts a point-gap perspective and unveils that
the non-Hermitian skin effect in all spatial dimensions originates from point gaps.
We introduce the concept of uniform spectra and reveal that regardless of
lattice geometry, their energy spectra are universally given by the uniform
spectra, even though their manifestations of skin modes may differ. Building on
the uniform spectra, we demonstrate how to account for the skin effect with
generic lattice cuts and establish the connections of skin modes across
different geometric shapes via momentum-basis transformations. Our findings
highlight the pivotal roles point gaps play, offering a unified understanding
of the topological origin of non-Hermitian skin effect in all dimensions. | http://arxiv.org/abs/2306.12022v3 |
The Dadda algorithm is a parallel structured multiplier, which is considerably
faster than array multipliers such as the Booth, Braun, and Baugh-Wooley designs.
However, it consumes more power and needs a larger number of gates for hardware
implementation. In this paper, a modified-Dadda algorithm-based multiplier is
designed using a proposed half-adder-based carry-select adder with a binary to
excess-1 converter and an improved ripple-carry adder (RCA). The proposed
design is simulated in different technologies, namely Taiwan Semiconductor
Manufacturing Company (TSMC) 50nm, 90nm, and 120nm, and at different
frequencies, namely 0.5, 1, 2, and 3.33GHz. Specifically, the 4-bit circuit of
the proposed design in TSMC's 50nm technology consumes 25uW of power at 3.33GHz
with 76ps of delay. The simulation results reveal that the design is faster,
more power-energy-efficient, and requires a smaller number of transistors for
implementation as compared to some closely related works. The proposed design
can be a promising candidate for low-power and low-cost digital controllers. In
the end, the design has been compared with recent relevant works in the
literature. | http://arxiv.org/abs/2307.05677v1 |
We consider the problem of service hosting where a service provider can
dynamically rent edge resources via short term contracts to ensure better
quality of service to its customers. The service can also be partially hosted
at the edge, in which case, customers' requests can be partially served at the
edge. The total cost incurred by the system is modeled as a combination of the
rent cost, the service cost incurred due to latency in serving customers, and
the fetch cost incurred as a result of the bandwidth used to fetch the
code/databases of the service from the cloud servers to host the service at the
edge. In this paper, we compare multiple hosting policies with regret as a
metric, defined as the difference in the cost incurred by the policy and the
optimal policy over some time horizon $T$. In particular we consider the Retro
Renting (RR) and Follow The Perturbed Leader (FTPL) policies proposed in the
literature and provide performance guarantees on the regret of these policies.
We show that under i.i.d. stochastic arrivals, RR policy has linear regret while
FTPL policy has constant regret. Next, we propose a variant of FTPL, namely
Wait then FTPL (W-FTPL), which also has constant regret while demonstrating
much better dependence on the fetch cost. We also show that under adversarial
arrivals, RR policy has linear regret while both FTPL and W-FTPL have regret
$\mathrm{O}(\sqrt{T})$ which is order-optimal. | http://arxiv.org/abs/2303.06851v1 |
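The Follow The Perturbed Leader policy compared above picks, at each step, the action whose perturbed cumulative cost so far is smallest. A generic sketch over K abstract actions; the exponential perturbation and the toy cost sequence are illustrative assumptions, not the paper's hosting-specific costs:

```python
import random

def ftpl(loss_seq, n_actions, eta, seed=0):
    """Follow The Perturbed Leader: at each step, play the action whose
    perturbed cumulative loss so far is smallest.
    loss_seq: list of per-step loss vectors (one loss per action).
    eta: perturbation scale (larger means more exploration)."""
    rng = random.Random(seed)
    cum = [0.0] * n_actions
    total = 0.0
    for losses in loss_seq:
        noise = [rng.expovariate(1.0 / eta) for _ in range(n_actions)]
        a = min(range(n_actions), key=lambda i: cum[i] - noise[i])
        total += losses[a]  # incur this step's loss for the chosen action
        cum = [c + l for c, l in zip(cum, losses)]
    return total

def regret(loss_seq, n_actions, eta, seed=0):
    """Loss of FTPL minus the loss of the best fixed action in hindsight."""
    best = min(sum(step[i] for step in loss_seq) for i in range(n_actions))
    return ftpl(loss_seq, n_actions, eta, seed) - best
```

On a stationary cost sequence the perturbed leader locks onto the better action after a few steps, which is the intuition behind the constant-regret guarantee under i.i.d. arrivals.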
Masked autoencoder (MAE), a simple and effective self-supervised learning
framework based on the reconstruction of masked image regions, has recently
achieved prominent success in a variety of vision tasks. Despite the emergence
of intriguing empirical observations on MAE, a theoretically principled
understanding is still lacking. In this work, we formally characterize and
justify existing empirical insights and provide theoretical guarantees of MAE.
We formulate the underlying data-generating process as a hierarchical latent
variable model and show that under reasonable assumptions, MAE provably
identifies a set of latent variables in the hierarchical model, explaining why
MAE can extract high-level information from pixels. Further, we show how key
hyperparameters in MAE (the masking ratio and the patch size) determine which
true latent variables to be recovered, therefore influencing the level of
semantic information in the representation. Specifically, extremely large or
small masking ratios inevitably lead to low-level representations. Our theory
offers coherent explanations of existing empirical observations and provides
insights for potential empirical improvements and fundamental limitations of
the masking-reconstruction paradigm. We conduct extensive experiments to
validate our theoretical insights. | http://arxiv.org/abs/2306.04898v1 |
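The two hyperparameters highlighted above, the masking ratio and the patch size, enter MAE through a simple preprocessing step: patchify the image, then hide a random subset of patches from the encoder. A minimal sketch of that step:

```python
import random

def patchify(img, patch):
    """Split an H x W image (list of rows) into non-overlapping
    patch x patch blocks, returned in row-major order."""
    H = len(img)
    patches = []
    for i in range(0, H, patch):
        for j in range(0, len(img[0]), patch):
            patches.append([row[j:j + patch] for row in img[i:i + patch]])
    return patches

def random_mask(n_patches, mask_ratio, seed=0):
    """Choose which patch indices are hidden from the encoder."""
    rng = random.Random(seed)
    n_mask = int(n_patches * mask_ratio)
    idx = list(range(n_patches))
    rng.shuffle(idx)
    return set(idx[:n_mask])
```

A larger patch or higher masking ratio leaves the encoder fewer, coarser visible tokens, which is exactly the knob the theory ties to the level of latent variables recovered.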
As high-speed, agile robots become more commonplace, these robots will have
the potential to better aid and collaborate with humans. However, due to the
increased agility and functionality of these robots, close collaboration with
humans can create safety concerns that alter team dynamics and degrade task
performance. In this work, we aim to enable the deployment of safe and
trustworthy agile robots that operate in proximity with humans. We do so by 1)
Proposing a novel human-robot doubles table tennis scenario to serve as a
testbed for studying agile, proximate human-robot collaboration and 2)
Conducting a user-study to understand how attributes of the robot (e.g., robot
competency or capacity to communicate) impact team dynamics, perceived safety,
and perceived trust, and how these latent factors affect human-robot
collaboration (HRC) performance. We find that robot competency significantly
increases perceived trust ($p<.001$), extending skill-to-trust assessments in
prior studies to agile, proximate HRC. Furthermore, interestingly, we find that
when the robot vocalizes its intention to perform a task, it results in a
significant decrease in team performance ($p=.037$) and perceived safety of the
system ($p=.009$). | http://arxiv.org/abs/2304.03756v1 |
In this paper, we describe Galois covers of algebraic curves and their
families by using local systems associated to push-forward of sheaves by the
structure morphism. More precisely, if $f:C\to Y$, we consider the sheaves
$f_*(\C)$. The group action by the Galois group $G$ yields a decomposition of
this sheaf into irreducible local systems corresponding to irreducible
representations of the group $G$. If $\rho$ is such an irreducible
representation, the eigensheaf $\L_{\rho}$ of $f_*(\C)$ gives rise to another
useful sheaf which is related to the homology group $H_1(C,\C)$. Using this, we
describe the action of the Galois group $G$ on the homology group. As a
particular example, we study the Dihedral covers of $\P^1$ in some detail. | http://arxiv.org/abs/2304.12883v2 |
Holistically measuring societal biases of large language models is crucial
for detecting and reducing ethical risks in highly capable AI models. In this
work, we present a Chinese Bias Benchmark dataset that consists of over 100K
questions jointly constructed by human experts and generative language models,
covering stereotypes and societal biases in 14 social dimensions related to
Chinese culture and values. The curation process contains 4 essential steps:
bias identification via extensive literature review, ambiguous context
generation, AI-assisted disambiguated context generation, and manual review \&
recomposition. The testing instances in the dataset are automatically derived
from 3K+ high-quality templates manually authored with stringent quality
control. The dataset exhibits wide coverage and high diversity. Extensive
experiments demonstrate the effectiveness of the dataset in detecting model
bias, with all 10 publicly available Chinese large language models exhibiting
strong bias in certain categories. Additionally, we observe from our
experiments that fine-tuned models could, to a certain extent, heed
instructions and avoid generating morally harmful outputs in some categories,
in a form of "moral self-correction". Our dataset and results are
publicly available at
\href{https://github.com/YFHuangxxxx/CBBQ}{https://github.com/YFHuangxxxx/CBBQ},
offering debiasing research opportunities to a widened community. | http://arxiv.org/abs/2306.16244v1 |
In this paper, we present a methodology that uses an optical tactile sensor
for efficient tactile exploration of embedded objects within soft materials.
The methodology consists of an exploration phase, where a probabilistic
estimate of the location of the embedded objects is built using a Bayesian
approach. The exploration phase is then followed by a mapping phase which
exploits the probabilistic map to reconstruct the underlying topography of the
workspace by sampling in more detail the regions where embedded objects are
expected. To demonstrate the effectiveness of the method, we tested our
approach on an experimental setup that consists of a series of quartz beads
located underneath a polyethylene foam that prevents direct observation of the
configuration and requires the use of tactile exploration to recover the
location of the beads. We show the performance of our methodology using ten
different configurations of the beads, in which the proposed approach is able to
approximate the underlying configuration. We benchmark our results against a
random sampling policy. | http://arxiv.org/abs/2308.11087v1 |
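The exploration phase's probabilistic map can be maintained with a per-cell Bayesian update after each probe. A minimal sketch with illustrative sensor hit and false-alarm rates (not the paper's values) and a greedy uncertainty-driven probing rule:

```python
def update_belief(belief, cell, hit, p_hit=0.9, p_false=0.1):
    """Bayesian update of a per-cell occupancy belief after probing `cell`.
    hit=True means the tactile sensor reported contact there.
    p_hit: P(contact | object present); p_false: P(contact | absent).
    Both sensor rates are illustrative assumptions."""
    new = dict(belief)
    prior = belief[cell]
    if hit:
        like_occ, like_emp = p_hit, p_false
    else:
        like_occ, like_emp = 1 - p_hit, 1 - p_false
    new[cell] = like_occ * prior / (like_occ * prior + like_emp * (1 - prior))
    return new

def next_probe(belief):
    """Greedy policy: probe the most uncertain cell (belief closest to 0.5)."""
    return min(belief, key=lambda c: abs(belief[c] - 0.5))
```

Cells whose belief has been driven toward 0 or 1 stop attracting probes, so sampling concentrates where the map is still uncertain, mirroring the exploration-then-mapping split described above.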
Web3Recommend is a decentralized Social Recommender System implementation
that enables Web3 Platforms on Android to generate recommendations that balance
trust and relevance. Generating recommendations in decentralized networks is a
non-trivial problem because these networks lack a global perspective due to the
absence of a central authority. Further, decentralized networks are prone to
Sybil Attacks in which a single malicious user can generate multiple fake or
Sybil identities. Web3Recommend relies on a novel graph-based content
recommendation design inspired by GraphJet, a recommendation system used at
Twitter, enhanced with MeritRank, a decentralized reputation scheme that
provides Sybil-resistance to the system. By adding MeritRank's decay parameters
to the vanilla Social Recommender Systems' personalized SALSA graph algorithm,
we can provide theoretical guarantees against Sybil Attacks in the generated
recommendations. Similar to GraphJet, we focus on generating real-time
recommendations by only acting on recent interactions in the social network,
allowing us to deliver temporally contextual recommendations while keeping a
tight bound on the memory usage in resource-constrained devices, allowing for a
seamless user experience. As a proof-of-concept, we integrate our system with
MusicDAO, an open-source Web3 music-sharing platform, to generate personalized,
real-time recommendations. Thus, we provide the first Sybil-resistant Social
Recommender System, allowing real-time recommendations beyond classic
user-based collaborative filtering. The system is also rigorously tested with
extensive unit and integration tests. Further, our experiments demonstrate the
trust-relevance balance of recommendations against multiple adversarial
strategies in a test network generated using data from real music platforms. | http://arxiv.org/abs/2307.01411v1 |
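The personalized SALSA-style ranking at the core of the design can be sketched with alternating random walks on the user-item bipartite graph. The per-hop decay below stands in for MeritRank's decay parameters and is an illustrative simplification, not the paper's full Sybil-resistant scheme:

```python
import random

def salsa_recommend(user_items, seed_user, walks=2000, length=4,
                    decay=0.9, seed=0):
    """Personalized SALSA sketch: alternating user -> item -> user random
    walks starting from `seed_user`. Item visit counts, discounted by
    `decay` per hop, rank the recommendations."""
    rng = random.Random(seed)
    # invert the graph: item -> users who interacted with it
    item_users = {}
    for u, items in user_items.items():
        for it in items:
            item_users.setdefault(it, []).append(u)
    scores = {}
    for _ in range(walks):
        u, w = seed_user, 1.0
        for _ in range(length):
            items = user_items.get(u)
            if not items:
                break
            it = rng.choice(items)            # user -> item hop
            scores[it] = scores.get(it, 0.0) + w
            u = rng.choice(item_users[it])    # item -> user hop
            w *= decay
    return sorted(scores, key=scores.get, reverse=True)
```

Because walks only traverse observed interactions, items in components disconnected from the seed user never appear, and short walks keep both latency and memory bounded, as the real-time design above requires.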
Metaverse has become a buzzword recently. Mobile augmented reality (MAR) is a
promising approach to providing users with an immersive experience in the
Metaverse. However, due to limitations of bandwidth, latency and computational
resources, MAR cannot be applied on a large scale in the Metaverse yet.
Moreover, federated learning, with its privacy-preserving characteristics, has
emerged as a prospective distributed learning framework in the future Metaverse
world. In this paper, we propose a federated learning assisted MAR system via
non-orthogonal multiple access for the Metaverse. Additionally, to optimize a
weighted sum of energy, latency and model accuracy, a resource allocation
algorithm is devised by setting appropriate transmission power, CPU frequency
and video frame resolution for each user. Experimental results demonstrate that
our proposed algorithm achieves an overall good performance compared to a
random algorithm and greedy algorithm. | http://arxiv.org/abs/2301.12085v2 |
We present canonical forms for all indecomposable pairs $(A,B)$ of commuting
nilpotent matrices over an arbitrary field under simultaneous similarity, where
$A$ is the direct sum of two Jordan blocks with distinct sizes. We also provide
the transformation matrix $X$ such that $(A, X^{-1}BX)$ is in its canonical
form. | http://arxiv.org/abs/2305.00176v1 |
Perfectly contractile graphs form a typical class of perfect graphs. In
particular, all $k$-colorings of a perfectly contractile graph are Kempe
equivalent. Everett and Reed conjectured that a graph is perfectly contractile
if and only if it contains no odd holes, no antiholes and no odd prisms. On the
other hand the authors and Shibata conjectured that a perfect graph is
perfectly contractile if and only if its toric ring, which is called the stable
set ring, is quadratic. In the present paper, we characterize when the stable
set ring of a (not necessarily perfect) graph is quadratic by using Kempe
equivalence. As applications of this characterization, we can claim that if
Everett and Reed's conjecture is true, then the conjecture of the authors and
Shibata is also true. Moreover, we can show that for several important classes
of perfectly contractile graphs, the stable set rings are quadratic. | http://arxiv.org/abs/2303.12824v2 |
Recent theoretical developments in the description of jet evolution in the
quark-gluon plasma have made it possible to account for the effects of hydrodynamic
gradients in the medium modified jet spectra. These constitute a crucial step
towards using jets as tomographic probes of the nuclear matter they traverse.
In this work, we complement these studies by providing leading order
calculations of widely studied jet observables, taking into account matter
anisotropies. We show that the energy distribution inside a jet is pushed
towards the direction of the largest matter anisotropy, while the away region
is depleted. As a consequence, the jet mass and girth gain a non-trivial
azimuthal dependence, with the average value of the distribution increasing
along the direction of largest gradients. However, we find that, for these jet
shapes, matter anisotropic effects can be potentially suppressed by vacuum
Sudakov factors. We argue that the recently proposed measurements of energy
correlations within jets do not suffer from such effects, with the azimuthal
dependence being visible in a large angular window, regardless of the shape of
the distribution. | http://arxiv.org/abs/2308.01294v1 |
The properties of strongly-coupled lattice gauge theories at finite density
as well as in real time have largely eluded first-principles studies on the
lattice. This is due to the failure of importance sampling for systems with a
complex action. An alternative to evade the sign problem is quantum simulation.
Although still in its infancy, a lot of progress has been made in devising
algorithms to address these problems. In particular, recent efforts have
addressed the question of how to produce thermal Gibbs states on a quantum
computer. In this study, we apply a variational quantum algorithm to a
low-dimensional model which has a local abelian gauge symmetry. We demonstrate
how this approach can be applied to obtain information regarding the phase
diagram as well as unequal-time correlation functions at non-zero temperature. | http://arxiv.org/abs/2306.06057v1 |
The Malliavin differentiability of an SDE plays a crucial role in the study of
density smoothness and ergodicity, among others. For Gaussian-driven SDEs, the
differentiability property is now well established.
In this paper, we consider the Malliavin differentiability for the Euler
scheme of such SDEs. We will focus on SDEs driven by fractional Brownian
motions (fBm), which is a very natural class of Gaussian processes.
We derive a uniform (in the step size $n$) path-wise upper-bound estimate for
the Euler scheme for stochastic differential equations driven by fBm with Hurst
parameter $H>1/3$ and its Malliavin derivatives. | http://arxiv.org/abs/2305.10365v1 |
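The Euler scheme for an SDE dX_t = b(X_t)dt + sigma(X_t)dB_t discretizes as X_{k+1} = X_k + b(X_k)h + sigma(X_k)dB_k. A minimal sketch that takes the driving-noise increments as input; for fBm with Hurst parameter H the increments would be sampled from the correlated Gaussian increments of B^H, whose generation is outside this sketch:

```python
def euler_scheme(b, sigma, x0, T, n, increments):
    """Euler scheme X_{k+1} = X_k + b(X_k)*h + sigma(X_k)*dB_k on [0, T]
    with n uniform steps. `increments` supplies the n driving-noise
    increments dB_k (for fBm: correlated Gaussians; here left abstract)."""
    h = T / n
    x = x0
    path = [x0]
    for k in range(n):
        x = x + b(x) * h + sigma(x) * increments[k]
        path.append(x)
    return path
```

With the noise switched off, the scheme reduces to explicit Euler for the ODE x' = b(x), a quick sanity check on the discretization.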
Transformers have gained popularity in time series forecasting for their
ability to capture long-sequence interactions. However, their high memory and
computing requirements pose a critical bottleneck for long-term forecasting. To
address this, we propose TSMixer, a lightweight neural architecture exclusively
composed of multi-layer perceptron (MLP) modules for multivariate forecasting
and representation learning on patched time series. Inspired by MLP-Mixer's
success in computer vision, we adapt it for time series, addressing challenges
and introducing validated components for enhanced accuracy. This includes a
novel design paradigm of attaching online reconciliation heads to the MLP-Mixer
backbone, for explicitly modeling the time-series properties such as hierarchy
and channel-correlations. We also propose a novel Hybrid channel modeling and
infusion of a simple gating approach to effectively handle noisy channel
interactions and generalization across diverse datasets. By incorporating these
lightweight components, we significantly enhance the learning capability of
simple MLP structures, outperforming complex Transformer models with minimal
computing usage. Moreover, TSMixer's modular design enables compatibility with
both supervised and masked self-supervised learning methods, making it a
promising building block for time-series Foundation Models. TSMixer outperforms
state-of-the-art MLP and Transformer models in forecasting by a considerable
margin of 8-60%. It also outperforms the latest strong benchmarks of
Patch-Transformer models (by 1-2%) with a significant reduction in memory and
runtime (2-3X). The source code of our model is officially released as
PatchTSMixer in the HuggingFace. Model:
https://huggingface.co/docs/transformers/main/en/model_doc/patchtsmixer
Examples: https://github.com/ibm/tsfm/#notebooks-links | http://arxiv.org/abs/2306.09364v4 |
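The MLP-Mixer backbone referenced above alternates mixing across the patch (time) dimension with mixing across channels. A dependency-free sketch of one such block, with layer norm and non-linearities omitted for brevity; weight matrices are stored column-major, and this is a structural illustration rather than TSMixer's implementation:

```python
def linear(x, W, b):
    """y_j = sum_i x_i * W[j][i] + b[j], with W stored column-major."""
    return [sum(xi * wij for xi, wij in zip(x, col)) + bj
            for col, bj in zip(W, b)]

def transpose(X):
    return [list(col) for col in zip(*X)]

def mixer_block(X, W_tok, b_tok, W_ch, b_ch):
    """One MLP-Mixer-style block on X (n_patches x n_channels):
    token mixing (across patches, per channel), then channel mixing
    (across channels, per patch), each with a residual connection."""
    Xt = transpose(X)  # channels x patches
    mixed = transpose([linear(row, W_tok, b_tok) for row in Xt])
    X = [[a + m for a, m in zip(r1, r2)] for r1, r2 in zip(X, mixed)]
    mixed = [linear(row, W_ch, b_ch) for row in X]
    return [[a + m for a, m in zip(r1, r2)] for r1, r2 in zip(X, mixed)]
```

With identity weights and zero biases each mixing step returns its input, so the two residual connections simply scale the input by four, an easy way to check the wiring.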
Text-to-3D modelling has seen exciting progress by combining generative
text-to-image models with image-to-3D methods like Neural Radiance Fields.
DreamFusion recently achieved high-quality results but requires a lengthy,
per-prompt optimization to create 3D objects. To address this, we amortize
optimization over text prompts by training on many prompts simultaneously with
a unified model, instead of separately. With this, we share computation across
a prompt set, training in less time than per-prompt optimization. Our framework
- Amortized text-to-3D (ATT3D) - enables knowledge-sharing between prompts to
generalize to unseen setups and smooth interpolations between text for novel
assets and simple animations. | http://arxiv.org/abs/2306.07349v1 |
Nearly thirty years ago, it was shown that $\Omega(\sqrt{n})$ registers are
needed to solve obstruction-free consensus among $n$ processes. This lower
bound was improved to $n$ registers in 2018, which exactly matches the best
upper bound. The $\Omega(\sqrt{n})$ space complexity lower bound actually
applies to a class of objects called historyless objects, which includes
registers, test-and-set objects, and readable swap objects. However, every
known $n$-process obstruction-free consensus algorithm from historyless objects
uses $\Omega (n)$ objects.
We give the first $\Omega (n)$ space complexity lower bounds on consensus
algorithms for two kinds of historyless objects. First, we show that any
obstruction-free consensus algorithm from swap objects uses at least $n-1$
objects. More generally, we prove that any obstruction-free $k$-set agreement
algorithm from swap objects uses at least $\lceil \frac{n}{k}\rceil - 1$
objects. This is the first non-constant lower bound on the space complexity of
solving $k$-set agreement with swap objects when $k > 1$. We also present an
obstruction-free $k$-set agreement algorithm from $n-k$ swap objects, exactly
matching our lower bound when $k=1$.
Second, we show that any obstruction-free binary consensus algorithm from
readable swap objects with domain size $b$ uses at least $\frac{n-2}{3b+1}$
objects. Since any historyless object can be simulated by a readable swap
object with the same domain, our results imply that any obstruction-free
consensus algorithm from historyless objects with domain size $b$ uses at least
$\frac{n-2}{3b+1}$ objects. For $b = 2$, we show a slightly better lower bound
of $n-2$. The best known obstruction-free binary consensus algorithm from
readable swap objects with domain size $2$ uses $2n-1$ objects, asymptotically
matching our lower bound. | http://arxiv.org/abs/2305.06507v2 |
We consider lexicographic bi-objective problems on Markov Decision Processes
(MDPs), where we optimize one objective while guaranteeing optimality of
another. We propose a two-stage technique for solving such problems when the
objectives are related (in a way that we formalize). We instantiate our
technique for two natural pairs of objectives: minimizing the (conditional)
expected number of steps to a target while guaranteeing the optimal probability
of reaching it; and maximizing the (conditional) expected average reward while
guaranteeing an optimal probability of staying safe (w.r.t. some safe set of
states). For the first combination of objectives, which covers the classical
frozen lake environment from reinforcement learning, we also report on
experiments performed using a prototype implementation of our algorithm and
compare it with what can be obtained from state-of-the-art probabilistic model
checkers solving optimal reachability. | http://arxiv.org/abs/2305.09634v2 |
In the wake of information overload in academia, methodologies and systems
for search, recommendation, and prediction to aid researchers in identifying
relevant research are actively studied and developed. Existing work, however,
is limited in terms of granularity, focusing only on the level of papers or a
single type of artifact, such as data sets. To enable more holistic analyses
and systems dealing with academic publications and their content, we propose
CoCon, a large scholarly data set reflecting the combined use of research
artifacts, contextualized in academic publications' full-text. Our data set
comprises 35k artifacts (data sets, methods, models, and tasks) and 340k
publications. We additionally formalize a link prediction task for "combined
research artifact use prediction" and provide code to utilize analyses of and
the development of ML applications on our data. All data and code is publicly
available at https://github.com/IllDepence/contextgraph. | http://arxiv.org/abs/2303.15193v1 |
Combining multiple gravitational-wave observations allows for stringent tests
of general relativity, targeting effects that would otherwise be undetectable
using single-event analyses. We highlight how the finite size of the observed
catalog induces a significant source of variance. If not appropriately
accounted for, general relativity can be excluded with arbitrarily large
credibility even if it is the underlying theory of gravity. This effect is
generic and holds for arbitrarily large catalogs. Moreover, we show that it
cannot be suppressed by selecting "golden" observations with large
signal-to-noise ratios. We present a mitigation strategy based on bootstrapping
(i.e. resampling with repetition) that allows assigning uncertainties to one's
credibility on the targeted test. We demonstrate our findings using both toy
models and real gravitational-wave data. In particular, we quantify the impact
of the catalog variance on the ringdown properties of black holes using the
latest LIGO/Virgo catalog. | http://arxiv.org/abs/2310.03811v2 |
The postulate of universal Weyl conformal symmetry for all elementary
physical fields introduces nonclassical gravitational effects in both conformal
gravitation (CG) and the conformal Higgs model (CHM). The resulting theory is
found to explain major observed phenomena including excessive galactic rotation
velocities and accelerating Hubble expansion, without invoking dark matter
(DM). The recent history of this development is surveyed here. Implications of
the theory include galactic baryonic Tully-Fisher relations and dark galactic
haloes of definite large radius. Cosmological CHM parameters exclude a massive
Higgs boson but are consistent with a novel alternative particle of the
observed mass. | http://arxiv.org/abs/2308.10399v2 |
The cosmological principle is one of the fundamental assumptions of the
standard model of Cosmology (SCM), and it allows us to describe cosmic distances
and clocks by using the Friedmann-Lema$\rm{\hat{{\i}}}$tre-Robertson-Walker
(FLRW) metric. Thus, it is essential to test the FLRW metric with cosmological
observations to verify the validity of the SCM. In this work, we perform tests
of the FLRW metric by comparing the observational comoving angles between the
Hubble $H(z)$ and angular Baryon Acoustic Oscillation (BAO) measurements. The
Gaussian process is employed to reconstruct the Hubble $H(z)$ measurements and
the angular diameter distance (ADD) from the transversal BAO data. A
non-parametric method is adopted to probe the possible deviations from the FLRW
metric at any redshift by comparing the comoving distances from the
reconstructed Hubble $H(z)$ measurements with the ADD reconstructed from the
transversal BAO data. Then, we propose two types of parameterizations for the
deviations from the FLRW metric, and test the FLRW metric by using the priors
of specific sound horizon scales. To avoid the bias caused by the prior of a
specific sound horizon scale, we perform the consistency test with a flat prior
of the sound horizon scale. We find that there is a concordance between the FLRW
metric and the observational data by using parametric and non-parametric
methods, and the parameterizations can be employed to test the FLRW metric in a
new way independent of the sound horizon scale. | http://arxiv.org/abs/2305.01268v2 |
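In a flat FLRW metric, the line-of-sight comoving distance used in such consistency tests is D_C(z) = c * integral from 0 to z of dz'/H(z'), which can be evaluated from reconstructed H(z) values by simple quadrature. A minimal sketch using the trapezoidal rule:

```python
def comoving_distance(hubble, z, n=1000, c=299792.458):
    """Line-of-sight comoving distance D_C(z) = c * int_0^z dz'/H(z')
    for a flat FLRW metric, via the trapezoidal rule.
    hubble: callable returning H(z') in km/s/Mpc; c in km/s; result in Mpc."""
    h = z / n
    zs = [i * h for i in range(n + 1)]
    f = [1.0 / hubble(zv) for zv in zs]
    integral = h * (0.5 * f[0] + sum(f[1:-1]) + 0.5 * f[-1])
    return c * integral
```

Comparing this quantity, built from reconstructed H(z), against the comoving distance inferred from transversal BAO is the kind of metric consistency check the work above performs.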
Traditionally, the solar activity cycle is thought as an interplay of the
main dipole component of the solar poloidal magnetic field and the toroidal
magnetic field. However, the real picture as presented in the extended
solar-cycle models is much more complicated. Here, we develop the concept of
the extended solar cycle clarifying what zonal harmonics are responsible for
the equatorward and poleward propagating features in the surface activity
tracers. We arrive at a conclusion that the zonal harmonics with L = 5 play a
crucial role in separating the phenomena of both types, which are associated
with the odd zonal harmonics. Another objective of our analysis is the role of
the even zonal harmonics, which prove to be associated with the North-South
asymmetry of solar activity rather than with its 11-year periodicity. | http://arxiv.org/abs/2305.19427v1 |
In this paper, we propose a speaker verification method based on an Attentive
Multi-scale Convolutional Recurrent Network (AMCRN). The proposed AMCRN can
acquire both local spatial information and global sequential information from
the input speech recordings. In the proposed method, the logarithmic Mel spectrum is
extracted from each speech recording and then fed to the proposed AMCRN for
learning speaker embedding. Afterwards, the learned speaker embedding is fed to
the back-end classifier (such as cosine similarity metric) for scoring in the
testing stage. The proposed method is compared with state-of-the-art methods
for speaker verification. Experimental data are three public datasets that are
selected from two large-scale speech corpora (VoxCeleb1 and VoxCeleb2).
Experimental results show that our method exceeds baseline methods in terms of
equal error rate and minimal detection cost function, and has advantages over
most of the baseline methods in terms of computational complexity and memory
requirement. In addition, our method generalizes well across truncated speech
segments with different durations, and the speaker embedding learned by the
proposed AMCRN has stronger generalization ability across two back-end
classifiers. | http://arxiv.org/abs/2306.00426v1 |
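The cosine-similarity back-end and the equal-error-rate metric mentioned above can be sketched as follows; this is a generic illustration, not the paper's evaluation code:

```python
import numpy as np

def cosine_score(a, b):
    # Cosine similarity between two speaker embeddings
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def equal_error_rate(scores, labels):
    # Sweep thresholds over the observed scores; the EER is where the
    # false-acceptance and false-rejection rates cross
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, int)
    best_far, best_frr = 1.0, 0.0
    for thr in np.unique(scores):
        far = float(np.mean(scores[labels == 0] >= thr))  # impostors accepted
        frr = float(np.mean(scores[labels == 1] < thr))   # targets rejected
        if abs(far - frr) < abs(best_far - best_frr):
            best_far, best_frr = far, frr
    return 0.5 * (best_far + best_frr)
```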
The previously proposed SpEx+ has yielded outstanding performance in speaker
extraction and attracted much attention. However, it still suffers from inadequate
utilization of multi-scale information and speaker embeddings. To this end, this
paper proposes a new effective speaker extraction system with multi-scale
interfusion and conditional speaker modulation (ConSM), which is called
MC-SpEx. First of all, we design the weight-share multi-scale fusers
(ScaleFusers) for efficiently leveraging multi-scale information as well as
ensuring consistency of the model's feature space. Then, to consider different
scale information while generating masks, the multi-scale interactive mask
generator (ScaleInterMG) is presented. Moreover, we introduce the ConSM module to
fully exploit speaker embedding in the speech extractor. Experimental results
on the Libri2Mix dataset demonstrate the effectiveness of our improvements and
the state-of-the-art performance of our proposed MC-SpEx. | http://arxiv.org/abs/2306.16250v1 |
Annotations play a vital role in highlighting critical aspects of
visualizations, aiding in data externalization and exploration, collaborative
sensemaking, and visual storytelling. However, despite their widespread use, we
identified the lack of a design space describing common annotation practices. In
this paper, we evaluated over 1,800 static annotated charts to understand how
people annotate visualizations in practice. Through qualitative coding of these
diverse real-world annotated charts, we explored three primary aspects of
annotation usage patterns: analytic purposes for chart annotations (e.g.,
present, identify, summarize, or compare data features), mechanisms for chart
annotations (e.g., types and combinations of annotations used, frequency of
different annotation types across chart types, etc.), and the data source used
to generate the annotations. We then synthesized our findings into a design
space of annotations, highlighting key design choices for chart annotations. We
presented three case studies illustrating our design space as a practical
framework for chart annotations to enhance the communication of visualization
insights. All supplemental materials are available at
{https://shorturl.at/bAGM1}. | http://arxiv.org/abs/2306.06043v2 |
Time-Lock Puzzles (TLPs) are cryptographic protocols that enable a client to
lock a message in such a way that a server can only unlock it after a specific
time period. However, existing TLPs have certain limitations: (i) they assume
that both the client and server always possess sufficient computational
resources and (ii) they solely focus on the lower time bound for finding a
solution, disregarding the upper bound that guarantees a regular server can
find a solution within a certain time frame. Additionally, existing TLPs
designed to handle multiple puzzles either (a) entail high verification costs
or (b) lack generality, requiring identical time intervals between consecutive
solutions. To address these limitations, this paper introduces, for the first
time, the concept of a "Delegated Time-Lock Puzzle" and presents a protocol
called "Efficient Delegated Time-Lock Puzzle" (ED-TLP) that realises this
concept. ED-TLP allows the client and server to delegate their
resource-demanding tasks to third-party helpers. It facilitates real-time
verification of solution correctness and efficiently handles multiple puzzles
with varying time intervals. ED-TLP ensures the delivery of solutions within
predefined time limits by incorporating both an upper bound and a fair payment
algorithm. We have implemented ED-TLP and conducted a comprehensive analysis of
its overheads, demonstrating the efficiency of the construction. | http://arxiv.org/abs/2308.01280v1 |
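For background, the classic (non-delegated) time-lock puzzle of Rivest, Shamir, and Wagner, on which TLP constructions build, can be sketched as follows; the toy primes and XOR masking are purely illustrative:

```python
def create_puzzle(message: int, t: int):
    # The client knows the factorization of n, so it can compute
    # 2^(2^t) mod n cheaply by reducing the exponent mod phi(n)
    p, q = 10007, 10009          # toy primes; real puzzles use large RSA primes
    n, phi = p * q, (p - 1) * (q - 1)
    key = pow(2, pow(2, t, phi), n)
    return message ^ key, n, t   # XOR-mask the message with the key

def solve_puzzle(ciphertext: int, n: int, t: int) -> int:
    # The server lacks phi(n): it must perform t inherently sequential squarings
    x = 2
    for _ in range(t):
        x = pow(x, 2, n)
    return ciphertext ^ x
```

The lower time bound rests on the conjectured sequentiality of modular squaring; the upper bound, delegation, and fair payment of ED-TLP are not captured by this sketch.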
The origin and evolution of structure in the Universe could be studied in the
Dark Ages. The highly redshifted HI signal at 30 < z < 80 is the only
observable signal from this era. Human radio interference and ionospheric
effects limit Earth-based radio astronomy to frequencies > 30 MHz. To observe
the low-frequency window, with science targets including compact steep-spectrum
sources, pulsars, and solar activity, a 200 km baseline lunar far-side radio
interferometer has been much discussed. This paper conducts a preliminary site
survey of potential far-side craters, which are few in number on the
mountainous lunar far-side. Based on LRO LOLA data, 200 m resolution
topographic maps of eight far-side sites were produced, and slope and roughness
maps were derived from them. A figure of merit was created to determine the
optimum site. Three sites are identified as promising. There is a need to
protect these sites for astronomy. | http://arxiv.org/abs/2307.11616v1 |
It is shown that the constant $c_{d,3}$ in von Neumann's inequality for
d-tuples of commuting and row-contractive $3\times3$ matrices, as proved by
Hartz, Richter, and Shalit in [2], is independent of the size of the d-tuple. A
numerical estimation of the constant is provided. | http://arxiv.org/abs/2310.12908v1 |
The properties of metallic systems with important and structured excitations
at low energies, such as Cu, are challenging to describe with simple models
like the plasmon pole approximation (PPA), and more accurate and sometimes
prohibitive full-frequency approaches are usually required. In this paper we
propose a numerical approach to $GW$ calculations on metals that takes into
account the frequency dependence of the screening via the multipole
approximation (MPA), an accurate and efficient alternative to current
full-frequency methods that was recently developed and validated for
semiconductors and overcomes several limitations of PPA. We now demonstrate
that MPA can be successfully extended to metallic systems by optimizing the
frequency sampling for this class of materials and introducing a simple method
to include the $\mathbf{q}\to 0$ limit of the intra-band contributions. The
good agreement between MPA and full-frequency results for the calculations of
quasi-particle energies, polarizability, self-energy and spectral functions in
different metallic systems confirms the accuracy and computational efficiency
of the method. Finally, we discuss the physical interpretation of the MPA poles
through a comparison with experimental electron energy loss spectra for Cu. | http://arxiv.org/abs/2301.02282v1 |
Small collision systems, e.g. $p$-$p$ and $p$-Pb collisions, comprise a
potential reference for more-central A-A collisions with regard to production
(or not) of a thermalized quark-gluon plasma (QGP). Small systems with low
particle densities should evolve according to simple QCD mechanisms including
projectile-nucleon dissociation and dijet production. But it is now claimed
that QGP may appear even in $p$-$p$ collisions based on apparent evidence for
radial flow from shape evolution of $p_t$ spectra and from variation of total
yields for strange and multistrange hadrons relative to statistical models. The
present study confronts such arguments with a detailed analysis of $p_t$
spectra for strange and multistrange hadrons from 5 TeV $p$-Pb collisions and
13 TeV $p$-$p$ collisions via a two-component model (TCM) of hadron production.
Based on previous analysis of lighter hadrons the TCM accurately predicts
spectra for Cascade and Omega hadrons. Significant results include multistrange
hadron spectra dominated by jet fragments, variation of strange-hadron
abundances exaggerated by certain plot formats and spectrum extrapolations, and
detailed relations between ensemble-mean $\bar p_t$ evolution with event charge
density and small shifts of jet fragment distributions on $p_t$. Within the
context of the TCM, $p$-$p$ and $p$-Pb collision systems with comparable jet
contributions are found to be equivalent within data uncertainties. Attribution
of certain data features to radial flow is falsified. | http://arxiv.org/abs/2303.14299v1 |
Burgers' equation is an important mathematical model used to study gas
dynamics and traffic flow, among many other applications. Previous analysis of
solutions to Burgers' equation shows an infinite stream of simple poles born at
$t = 0^+$, emerging rapidly from the singularities of the initial condition, that
drive the evolution of the solution for $t > 0$.
We build on this work by applying exponential asymptotics and transseries
methodology to an ordinary differential equation that governs the small-time
behaviour in order to derive asymptotic descriptions of these poles and
associated zeros.
Our analysis reveals that subdominant exponentials appear in the solution
across Stokes curves; these exponentials become the same size as the leading
order terms in the asymptotic expansion along anti-Stokes curves, which is
where the poles and zeros are located. In this region of the complex plane, we
write a transseries approximation consisting of nested series expansions. By
reversing the summation order in a process known as transasymptotic summation,
we study the solution as the exponentials grow, and approximate the pole and
zero location to any required asymptotic accuracy.
We present the asymptotic methods in a systematic fashion that should be
applicable to other nonlinear differential equations. | http://arxiv.org/abs/2307.10508v1 |
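For reference, Burgers' equation in its standard viscous form (with $\nu$ the viscosity; the precise scaling used in the small-time analysis may differ) reads

```latex
u_t + u\,u_x = \nu\,u_{xx},
```

and the poles and zeros discussed above are features of the analytic continuation of the solution into the complex plane.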
We consider a gauge-invariant Ginzburg-Landau functional (also known as
Abelian Yang-Mills-Higgs model) on Hermitian line bundles over closed
Riemannian manifolds of dimension $n \geq 3$. Assuming a logarithmic energy
bound in the coupling parameter, we study the asymptotic behaviour of critical
points in the non-self dual scaling, as the coupling parameter tends to zero.
After a convenient choice of the gauge, we show compactness of finite-energy
critical points in Sobolev norms. Moreover, thanks to a suitable monotonicity
formula, we prove that the energy densities
of critical points, rescaled by the logarithm of the coupling parameter,
concentrate towards the weight measure of a stationary, rectifiable varifold of
codimension~2. | http://arxiv.org/abs/2304.11346v2 |
Medical vision-language models enable co-learning and integrating features
from medical imaging and clinical text. However, these models are not easy to
train and the latent representation space can be complex. Here we propose a
novel way for pre-training and regularising medical vision-language models. The
proposed method, named Medical vision-language pre-training with Frozen
language models and Latent spAce Geometry optimization (M-FLAG), leverages a
frozen language model for training stability and efficiency and introduces a
novel orthogonality loss to harmonize the latent space geometry. We demonstrate
the potential of the pre-trained model on three downstream tasks: medical image
classification, segmentation, and object detection. Extensive experiments
across five public datasets demonstrate that M-FLAG significantly outperforms
existing medical vision-language pre-training approaches and reduces the number
of parameters by 78\%. Notably, M-FLAG achieves outstanding performance on the
segmentation task while using only 1\% of the RSNA dataset, even outperforming
ImageNet pre-trained models that have been fine-tuned using 100\% of the data. | http://arxiv.org/abs/2307.08347v2 |
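One common form of an orthogonality loss, shown purely as an illustration (the exact loss used by M-FLAG may differ), penalizes the deviation of the normalized feature Gram matrix from the identity:

```python
import numpy as np

def orthogonality_loss(Z):
    # Z: (batch, dim) matrix of latent features.
    # Penalize off-diagonal structure of the normalized Gram matrix so that
    # latent dimensions stay close to orthonormal.
    G = Z.T @ Z / Z.shape[0]
    return float(np.sum((G - np.eye(Z.shape[1])) ** 2))
```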
Metaverse is an immersive shared space that remote users can access through
virtual and augmented reality interfaces, enabling their avatars to interact
with each other and with the surroundings. Although digital objects can be
manipulated, physical objects cannot be touched, grasped, or moved within the
metaverse due to the lack of a suitable interface. This work proposes a
solution to overcome this limitation by introducing the concept of a Physical
Metaverse enabled by a new interface named "Avatarm". The Avatarm consists of
an avatar enhanced with a robotic arm that performs physical manipulation tasks
while remaining entirely hidden in the metaverse. The users have the illusion
that the avatar is directly manipulating objects without mediation by a
robot. The Avatarm is the first step towards a new metaverse, the "Physical
Metaverse", where users can physically interact with each other and with the
environment. | http://arxiv.org/abs/2303.15187v2 |
This paper explores the problem of selecting sensor nodes for a general class
of nonlinear dynamical networks. In particular, we study the problem by
utilizing altered definitions of observability and open-loop lifted observers.
The approach is performed by discretizing the system's dynamics using the
implicit Runge-Kutta method and by introducing a state-averaged observability
measure. The observability measure is computed for a number of perturbed
initial states in the vicinity of the system's true initial state. The sensor
node selection problem is revealed to retain the submodular and modular
properties of the original problem. This allows the problem to be solved
efficiently using a greedy algorithm with a guaranteed performance bound while
showing an augmented robustness to unknown or uncertain initial conditions. The
validity of this approach is numerically demonstrated on a $H_{2}/O_{2}$
combustion reaction network. | http://arxiv.org/abs/2307.07074v1 |
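The greedy selection with a guaranteed bound mentioned above follows the standard pattern for monotone submodular maximization, which achieves a $(1-1/e)$ approximation. A generic sketch, with a hypothetical `gain` function standing in for the state-averaged observability measure:

```python
def greedy_select(candidates, k, gain):
    # Repeatedly add the candidate with the largest marginal gain
    selected = []
    for _ in range(k):
        remaining = [c for c in candidates if c not in selected]
        best = max(remaining, key=lambda c: gain(selected + [c]) - gain(selected))
        selected.append(best)
    return selected
```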
Signal Temporal Logic (STL) has become a popular tool for expressing formal
requirements of Cyber-Physical Systems (CPS). The problem of verifying STL
properties of neural network-controlled CPS remains a largely unexplored
problem. In this paper, we present a model for the verification of Neural
Network (NN) controllers for general STL specifications using a custom neural
architecture where we map an STL formula into a feed-forward neural network
with ReLU activation. In the case where both our plant model and the controller
are ReLU-activated neural networks, we reduce the STL verification problem to
reachability in ReLU neural networks. We also propose a new approach for neural
network controllers with general activation functions; this approach is a sound
and complete verification approach based on computing the Lipschitz constant of
the closed-loop control system. We demonstrate the practical efficacy of our
techniques on a number of examples of learning-enabled control systems. | http://arxiv.org/abs/2303.05394v1 |
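A key ingredient in mapping STL robustness into a ReLU network is that $\max$ and $\min$, which define the robustness of disjunction and conjunction, are exactly expressible with ReLU units. A minimal sketch of this building block (not the paper's full construction):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def relu_max(a, b):
    # max(a, b) = b + relu(a - b): one hidden ReLU unit plus a skip connection
    return b + relu(a - b)

def relu_min(a, b):
    # min(a, b) = a - relu(a - b)
    return a - relu(a - b)

# Robustness of "phi or psi" is max(rho_phi, rho_psi); "phi and psi" uses min,
# so nested formulas compile to a feed-forward ReLU network over robustness values.
```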
We introduce the problem of ranking with slot constraints, which can be used
to model a wide range of application problems -- from college admission with
limited slots for different majors, to composing a stratified cohort of
eligible participants in a medical trial. We show that the conventional
Probability Ranking Principle (PRP) can be highly sub-optimal for
slot-constrained ranking problems, and we devise a new ranking algorithm,
called MatchRank. The goal of MatchRank is to produce rankings that maximize
the number of filled slots if candidates are evaluated by a human decision
maker in the order of the ranking. In this way, MatchRank generalizes the PRP,
and it subsumes the PRP as a special case when there are no slot constraints.
Our theoretical analysis shows that MatchRank has a strong approximation
guarantee without any independence assumptions between slots or candidates.
Furthermore, we show how MatchRank can be implemented efficiently. Beyond the
theoretical guarantees, empirical evaluations show that MatchRank can provide
substantial improvements over a range of synthetic and real-world tasks. | http://arxiv.org/abs/2310.17870v1 |
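To make the slot-constrained objective concrete: the number of filled slots for a set of accepted candidates is a maximum bipartite matching, and a greedy ranking can maximize its marginal growth. The toy sketch below (with a hypothetical `qualifies` predicate) illustrates this idea only, not the actual MatchRank algorithm:

```python
def matching_size(candidates, slots, qualifies):
    # Maximum bipartite matching via augmenting paths (Kuhn's algorithm)
    match = {}  # slot -> candidate
    def try_assign(c, seen):
        for s in slots:
            if qualifies(c, s) and s not in seen:
                seen.add(s)
                if s not in match or try_assign(match[s], seen):
                    match[s] = c
                    return True
        return False
    return sum(1 for c in candidates if try_assign(c, set()))

def greedy_slot_ranking(candidates, slots, qualifies):
    # Append, at each rank, the candidate with the largest marginal
    # increase in the number of fillable slots
    ranking, remaining = [], list(candidates)
    while remaining:
        base = matching_size(ranking, slots, qualifies)
        best = max(remaining,
                   key=lambda c: matching_size(ranking + [c], slots, qualifies) - base)
        ranking.append(best)
        remaining.remove(best)
    return ranking
```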
Corporate Cloud transformation is expected to continue to grow double-digit
each of the next few years. This growth is augmented by digital transformation,
which itself is gaining huge momentum due to the recent consumer behavior
trends and especially the COVID pandemic. It is also estimated that, globally,
billions of dollars are wasted due to inefficiencies in the way cloud migrations
are launched and handled. This paper discusses a framework with which
organizations can successfully execute cloud transformation. | http://arxiv.org/abs/2304.05333v1 |
We propose to search for a new type of gravitational wave signature relevant
for particle physics models with symmetries broken at vastly different energy
scales. The spectrum contains a characteristic double-peak structure consisting
of a sharp peak from domain walls and a smooth bump from a first order phase
transition in the early Universe. We demonstrate how such a gravitational wave
signal arises in a new theory unifying baryon number and color into an SU(4)
gauge group broken at the multi-TeV scale, and with lepton number promoted to
an SU(2) gauge symmetry broken at the multi-EeV scale. The model contains two
types of dark matter particles, explains the observed domination of matter over
antimatter in the Universe, and accommodates nonzero neutrino masses. We
discuss how future gravitational wave experiments, such as LISA, Big Bang
Observer, DECIGO, Einstein Telescope, and Cosmic Explorer, can be utilized to
look for this novel signature. | http://arxiv.org/abs/2305.12566v1 |
Artificial intelligence (AI) governance is the body of standards and
practices used to ensure that AI systems are deployed responsibly. Current AI
governance approaches consist mainly of manual review and documentation
processes. While such reviews are necessary for many systems, they are not
sufficient to systematically address all potential harms, as they do not
operationalize governance requirements for system engineering, behavior, and
outcomes in a way that facilitates rigorous and reproducible evaluation. Modern
AI systems are data-centric: they act on data, produce data, and are built
through data engineering. The assurance of governance requirements must also be
carried out in terms of data. This work explores the systematization of
governance requirements via datasets and algorithmic evaluations. When applied
throughout the product lifecycle, data-centric governance decreases time to
deployment, increases solution quality, decreases deployment risks, and places
the system in a continuous state of assured compliance with governance
requirements. | http://arxiv.org/abs/2302.07872v1 |
Nuclear energy has been gaining momentum recently as one of the solutions to
tackle climate change. However, significant environmental and health-risk
concerns remain associated with potential accidents. Despite significant
preventive efforts, we must acknowledge that accidents may happen and,
therefore, develop strategies and technologies for mitigating their
consequences. In this paper, we review the Fukushima Dai-ichi Nuclear Power
Plant accident, synthesize the time series and accident progressions across
relevant disciplines, including in-plant physics and engineering systems,
operators' actions, emergency responses, meteorology, radionuclide release and
transport, land contamination, and health impacts. In light of the latest
observations and simulation studies, we identify three key factors that
exacerbated the consequences of the accident: (1) the failure of Unit 2
containment venting, (2) the insufficient integration of radiation measurements
and meteorology data in the evacuation strategy, and (3) the limited risk
assessment and emergency preparedness. We propose new research and development
directions to improve the resilience of nuclear power plants, including (1)
meteorology-informed proactive venting, (2) machine learning-enabled adaptive
evacuation zones, and (3) comprehensive risk-informed emergency planning while
leveraging the experience from responses to other disasters. | http://arxiv.org/abs/2303.08868v1 |
Deep learning is emerging as an effective tool in drug discovery, with
potential applications in both predictive and generative models. Generative
Flow Networks (GFlowNets/GFNs) are a recently introduced method recognized for
their ability to generate diverse candidates, in particular in small-molecule
generation tasks. In this work, we introduce double GFlowNets (DGFNs). Drawing
inspiration from reinforcement learning and Double Deep Q-Learning, we
introduce a target network used to sample trajectories, while updating the main
network with these sampled trajectories. Empirical results confirm that DGFNs
effectively enhance exploration in sparse reward domains and high-dimensional
state spaces, both challenging aspects of de-novo design in drug discovery. | http://arxiv.org/abs/2310.19685v3 |
We show that the connected correlators of partition functions in double
scaled SYK model can be decomposed into ``trumpet'' and the discrete analogue
of the Weil-Petersson volume, which was defined by Norbury and Scott. We
explicitly compute this discrete volume for the first few orders in the genus
expansion and confirm that the discrete volume reduces to the Weil-Petersson
volume in a certain semi-classical limit. | http://arxiv.org/abs/2306.15981v2 |
In real life, adversarial attacks on deep learning models are a critical
security issue. However, this issue has rarely been discussed in the widely used
setting of class-incremental continual learning (CICL). In this paper, we address
the problems of applying adversarial training, a well-known defense against
adversarial attacks, to CICL. A well-known problem of CICL is class imbalance,
which biases a model toward the current task because only a few samples of
previous tasks are available. Combined with adversarial training, this imbalance
causes a secondary imbalance of attack trials across tasks. Lacking clean data
for minority classes due to the class imbalance, and facing increased attack
trials from majority classes due to the secondary imbalance, adversarial
training distorts the optimal decision boundaries. The distortion eventually
decreases both accuracy and robustness relative to standard adversarial
training. To exclude these effects, we propose a straightforward but highly
effective method, External Adversarial Training (EAT), which can be applied to
methods using experience replay. This method conducts adversarial training on an
auxiliary external model for the current task data at each time step, and
applies the generated adversarial examples to train the target model. We verify
the effects on a toy problem and show their significance on CICL
image-classification benchmarks. We expect that the results will serve as the
first baseline for robustness research on CICL. | http://arxiv.org/abs/2305.13678v1 |
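Adversarial example generation, the core ingredient of adversarial training, can be illustrated with the one-step FGSM attack on a logistic model; this is a generic sketch (EAT applies such training to an external model, which is not reproduced here):

```python
import numpy as np

def logistic_loss(x, w, b, y):
    # Logistic loss for a linear model; y in {-1, +1}
    return float(np.log1p(np.exp(-y * (w @ x + b))))

def fgsm_attack(x, w, b, y, eps=0.1):
    # FGSM: perturb x by eps along the sign of the input gradient of the
    # loss, which (for a linear model) provably increases the loss
    margin = y * (w @ x + b)
    grad_x = -y * w / (1.0 + np.exp(margin))  # d/dx log(1 + exp(-margin))
    return x + eps * np.sign(grad_x)
```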
Natural language processing and 2D vision models have attained remarkable
proficiency on many tasks primarily by escalating the scale of training data.
However, 3D vision tasks have not seen the same progress, in part due to the
challenges of acquiring high-quality 3D data. In this work, we present
Objaverse-XL, a dataset of over 10 million 3D objects. Our dataset comprises
deduplicated 3D objects from a diverse set of sources, including manually
designed objects, photogrammetry scans of landmarks and everyday items, and
professional scans of historic and antique artifacts. Representing the largest
scale and diversity in the realm of 3D datasets, Objaverse-XL enables
significant new possibilities for 3D vision. Our experiments demonstrate the
improvements enabled with the scale provided by Objaverse-XL. We show that by
training Zero123 on novel view synthesis, utilizing over 100 million multi-view
rendered images, we achieve strong zero-shot generalization abilities. We hope
that releasing Objaverse-XL will enable further innovations in the field of 3D
vision at scale. | http://arxiv.org/abs/2307.05663v1 |
In this work, we study the chiral and deconfinement phase transitions in a
two-flavor Polyakov loop extended Nambu--Jona-Lasinio (PNJL) model. Note that
the self-consistent mean-field approximation is employed, introducing an
arbitrary parameter $\alpha$ to measure the weights of the Fierz-transformed
interaction channels. Using this model, we systematically investigate the
chiral and deconfinement phase transition lines (as well as the chiral ones in
the NJL model for comparison) for different values of $\alpha$. It is found
that increasing $\alpha$ raises the chiral (pseudo)critical temperature at
fixed chemical potential and the chiral (pseudo)critical chemical potential at
fixed temperature, and that the critical end point (CEP) vanishes when $\alpha$
becomes large enough. Besides, we find that incorporating the Polyakov loop
increases $T_{CEP}$ but does not change $\mu_{CEP}$ for small values of
$\alpha$. | http://arxiv.org/abs/2306.12036v1 |
In the literature, there are various notions of stochasticity which measure
how well an algorithmically random set satisfies the law of large numbers. Such
notions can be categorized by disorder and adaptability: adaptive strategies
may use information observed about the set when deciding how to act, and
disorderly strategies may act out of order. In the disorderly setting, adaptive
strategies are more powerful than non-adaptive ones. In the adaptive setting,
Merkle et al. showed that disorderly strategies are more powerful than orderly
ones. This leaves open the question of how disorderly, non-adaptive strategies
compare to orderly, adaptive strategies, as well as how both relate to orderly,
non-adaptive strategies. In this paper, we show that orderly, adaptive
strategies and disorderly, non-adaptive strategies are both strictly more
powerful than orderly, non-adaptive strategies. Using the techniques developed
to prove this, we also make progress towards the former open question by
introducing a notion of orderly, ``weakly adaptable'' strategies which we prove
is incomparable with disorderly, non-adaptive strategies. | http://arxiv.org/abs/2306.02225v2 |
Low-rank approximation of tensors has been widely used in high-dimensional
data analysis. It usually involves singular value decomposition (SVD) of
large-scale matrices with high computational complexity. Sketching is an
effective data compression and dimensionality reduction technique applied to
the low-rank approximation of large matrices. This paper presents two practical
randomized algorithms for low-rank Tucker approximation of large tensors based
on sketching and power scheme, with a rigorous error-bound analysis. Numerical
experiments on synthetic and real-world tensor data demonstrate the competitive
performance of the proposed algorithms. | http://arxiv.org/abs/2301.11598v1 |
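A minimal version of the sketching-plus-power-scheme idea for Tucker approximation, in the spirit of randomized range finders (an illustration, not the paper's exact algorithms):

```python
import numpy as np

def mode_multiply(T, M, mode):
    # Multiply matrix M along the given tensor mode
    moved = np.moveaxis(T, mode, 0)
    out = np.tensordot(M, moved, axes=([1], [0]))
    return np.moveaxis(out, 0, mode)

def randomized_range(A, rank, power=1, oversample=5, seed=0):
    # Randomized range finder with a power scheme (Halko et al. style)
    rng = np.random.default_rng(seed)
    Y = A @ rng.standard_normal((A.shape[1], rank + oversample))
    for _ in range(power):
        Y = A @ (A.T @ Y)  # power iterations sharpen the spectral decay
    Q, _ = np.linalg.qr(Y)
    return Q[:, :rank]

def randomized_tucker(X, ranks, power=1, seed=0):
    # One factor per mode from a sketched range of its unfolding, then the core
    factors = []
    for mode, r in enumerate(ranks):
        unfolding = np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)
        factors.append(randomized_range(unfolding, r, power, seed=seed + mode))
    core = X
    for mode, U in enumerate(factors):
        core = mode_multiply(core, U.T, mode)
    return core, factors

def tucker_to_tensor(core, factors):
    X = core
    for mode, U in enumerate(factors):
        X = mode_multiply(X, U, mode)
    return X
```

For a tensor of exact multilinear rank, the sketched factors recover the true mode subspaces almost surely, so the reconstruction is exact up to floating-point error.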
In the field of robotics, robot teleoperation for remote or hazardous
environments has become increasingly vital. A major challenge is the lag
between command and action, which degrades operator awareness and performance
and increases mental strain. Even with advanced technology, mitigating these
delays, especially in long-distance operations, remains challenging. Current
solutions largely focus on machine-based adjustments. Yet, there is a gap in
using human perception to improve the teleoperation experience. This paper
presents a unique method of sensory manipulation to help humans adapt to such
delays. Drawing from motor learning principles, it suggests that modifying
sensory stimuli can lessen the perception of these delays. Instead of
introducing new skills, the approach uses existing motor coordination
knowledge. The aim is to minimize the need for extensive training or complex
automation. A study with 41 participants explored the effects of altered haptic
cues in delayed teleoperations. These cues were sourced from advanced physics
engines and robot sensors. Results highlighted benefits like reduced task time
and improved perceptions of visual delays. Real-time haptic feedback
significantly contributed to reduced mental strain and increased confidence.
This research emphasizes human adaptation as a key element in robot
teleoperation, advocating for improved teleoperation efficiency via swift human
adaptation, rather than solely optimizing robots for delay adjustment. | http://arxiv.org/abs/2310.08788v1 |
The purpose of this paper is to present new classes of function systems as
part of multiresolution analyses. Our approach is representation theoretic, and
it makes use of generalized multiresolution function systems (MRSs). It further
entails new ideas from the dynamics of measurable endomorphisms. Our results yield
applications that are not amenable to more traditional techniques used on
metric spaces. As the main tool in our approach, we make precise new classes of
generalized MRSs which arise directly from a dynamical theory approach to the
study of surjective endomorphisms on measure spaces. In particular, we give the
necessary and sufficient conditions for a family of functions to define
generators of Cuntz relations. We find an explicit description of the set of
generalized wavelet filters. Our results are motivated in part by analyses of
sub-band filters in signal/image processing. But our paper goes further,
applying to wider contexts such as measurable dynamical systems and complex
dynamics.
A unifying theme in our results is a new analysis of endomorphisms in general
measure space, and its connection to multi-resolutions, to representation
theory, and generalized wavelet systems. | http://arxiv.org/abs/2304.14558v1 |
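For context, a family $\{S_i\}_{i=1}^{N}$ of operators generates (a representation of) the Cuntz relations when

```latex
S_i^{*} S_j = \delta_{ij}\, I \qquad (1 \le i, j \le N), \qquad
\sum_{i=1}^{N} S_i S_i^{*} = I ,
```

and the conditions referred to above characterize when a family of functions yields such generators.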
Script identification and text recognition are some of the major domains in
the application of Artificial Intelligence. In this era of digitalization, the
use of digital note-taking has become a common practice. Still, the conventional
method of pen and paper remains a prominent way of writing. This leads to the
classification of scripts based on the method they are obtained. A survey on
the current methodologies and state-of-the-art methods used for processing and
identification would prove beneficial for researchers. The aim of this article
is to discuss the advancement in the techniques for script pre-processing and
text recognition. In India there are twelve prominent Indic scripts; unlike the
English language, these scripts have layers of characteristics. Complex
characteristics such as similarity in text shape make them difficult to
recognize and analyze, thus requiring advanced preprocessing methods for
their accurate recognition. A sincere attempt is made in this survey to provide
a comparison between all algorithms. We hope that this survey will provide
insight to researchers working not only on Indic scripts but also on other
languages. | http://arxiv.org/abs/2308.05780v1 |
High-quality machine learning models are dependent on access to high-quality
training data. When the data are not already available, it is tedious and
costly to obtain them. Data markets help with identifying valuable training
data: model consumers pay to train a model, the market uses that budget to
identify data and train the model (the budget allocation problem), and finally
the market compensates data providers according to their data contribution
(revenue allocation problem). For example, a bank could pay the data market to
access data from other financial institutions to train a fraud detection model.
Compensating data contributors requires understanding data's contribution to
the model; recent efforts to solve this revenue allocation problem based on the
Shapley value are too inefficient to lead to practical data markets.
In this paper, we introduce a new algorithm to solve budget allocation and
revenue allocation problems simultaneously in linear time. The new algorithm
employs an adaptive sampling process that selects data from those providers who
are contributing the most to the model. Better data means that the algorithm
accesses those providers more often, and more frequent accesses correspond to
higher compensation. Furthermore, the algorithm can be deployed in both
centralized and federated scenarios, boosting its applicability. We provide
theoretical guarantees for the algorithm that show the budget is used
efficiently and the properties of revenue allocation are similar to Shapley's.
Finally, we conduct an empirical evaluation to show the performance of the
algorithm in practical scenarios and when compared to other baselines. Overall,
we believe that the new algorithm paves the way for the implementation of
practical data markets. | http://arxiv.org/abs/2306.02543v1 |
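The adaptive sampling idea described in the abstract above can be sketched as a bandit-style loop; everything here (the epsilon-greedy rule, the running-mean contribution estimate, revenue proportional to access counts) is an illustrative assumption, not the paper's actual algorithm:

```python
import random

def allocate(providers, budget, contribution, seed=0):
    """Bandit-style sketch: spend one unit of budget per round on the
    provider whose estimated contribution is currently highest (with
    epsilon-greedy exploration); compensation is proportional to how
    often each provider's data was accessed."""
    rng = random.Random(seed)
    counts = {p: 0 for p in providers}
    estimates = {p: 0.0 for p in providers}
    for _ in range(budget):
        if rng.random() < 0.1:                      # explore
            p = rng.choice(providers)
        else:                                       # exploit best estimate
            p = max(providers, key=lambda q: estimates[q])
        gain = contribution(p)                      # observed marginal gain
        counts[p] += 1
        estimates[p] += (gain - estimates[p]) / counts[p]  # running mean
    total = sum(counts.values())
    revenue = {p: counts[p] / total for p in providers}
    return revenue

# Toy example: provider "b" contributes more, so it is accessed more
# often and receives the larger revenue share.
rev = allocate(["a", "b"], budget=500,
               contribution=lambda p: {"a": 0.2, "b": 0.8}[p])
```

The linear-time claim in the abstract matches this shape: each budget round touches a single provider, and revenue shares fall out of the access counts for free.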
Due to turbulence in the atmosphere, images taken from ground-based telescopes
become distorted. With adaptive optics (AO), images can be given greater
clarity, allowing for better observations with existing telescopes; AO is
essential for ground-based coronagraphic exoplanet imaging instruments. A
disadvantage of many AO systems is that they use sensors that cannot correct
for non-common path aberrations. We have developed a new focal-plane wavefront sensing
technique to address this problem called deformable mirror (DM)-based pupil
chopping. The process involves a coronagraphic or non-coronagraphic science
image and a deformable mirror, which modulates the phase by applying a local
tip/tilt every other frame; this enables correction of the aberrations left in
the wavefront after a conventional AO correction. We validate this technique
with both simulations (for coronagraphic and non-coronagraphic images) and
testing (for non-coronagraphic images) on UCSC's Santa Cruz Extreme AO
Laboratory (SEAL) testbed. We demonstrate that with as little as 250 nm of DM
stroke to apply the local tip/tilt, this wavefront sensor is linear for
low-order Zernike modes and enables real-time control, in principle up to kHz
speeds to correct for residual atmospheric turbulence. | http://arxiv.org/abs/2308.14855v1 |
We report the first experimental observation of multiple standing spin modes
in a 3D optomagnonic nanocavity formed by a nanometer-sized iron-garnet
nanocylinder. We show that the launching of standing spin modes is achieved
thanks to the high confinement of the optically generated effective magnetic
field caused by the localized optical resonance. Quantization and spin-wave
mode inhomogeneity are achieved in each of the three spatial dimensions. The
presented approach opens new horizons of 3D optomagnonics by combining
nanophotonic and magnonic functionalities within a single nanocavity. | http://arxiv.org/abs/2310.01974v1 |
In recent work, Darmon, Pozzi and Vonk explicitly construct a modular form
whose spectral coefficients are $p$-adic logarithms of Gross-Stark units and
Stark-Heegner points. Here we describe how this construction gives rise to a
practical algorithm for explicitly computing these logarithms to specified
precision, and how to recover the exact values of the Gross-Stark units and
Stark-Heegner points from them. Key tools are overconvergent modular forms,
reduction theory of quadratic forms and Newton polygons. As an application, we
tabulate Brumer-Stark units in narrow Hilbert class fields of real quadratic
fields with discriminants up to $10000$, for primes less than $20$, as well as
Stark-Heegner points on elliptic curves. | http://arxiv.org/abs/2301.08977v1 |
Cloud computing has revolutionized the way organizations manage their IT
infrastructure, but it has also introduced new challenges, such as managing
cloud costs. This paper explores various techniques for cloud cost
optimization, including cloud pricing, analysis, and strategies for resource
allocation. Real-world case studies of these techniques are presented, along
with a discussion of their effectiveness and key takeaways. The analysis
conducted in this paper reveals that organizations can achieve significant cost
savings by adopting cloud cost optimization techniques. Additionally, future
research directions are proposed to advance the state of the art in this
important field. | http://arxiv.org/abs/2307.12479v1 |
We investigate the identification of the time-dependent source term in the
diffusion equation using boundary measurements. This facilitates tracing back
the origins of environmental pollutants. Employing the concept of dynamic
complex geometrical optics (CGO) solutions, a variational formulation of the
inverse source problem is analyzed, leading to a proof of a uniqueness result.
Our proposed two-step reconstruction algorithm first determines the point
source locations and subsequently reconstructs the Fourier components of the
emission concentration functions. Numerical experiments on simulated data are
conducted. The results demonstrate that the proposed two-step reconstruction
algorithm can reliably reconstruct multiple point sources and accurately
reconstruct the emission concentration functions. Additionally, by partitioning
the algorithm into online and offline computations, and concentrating
computational demand offline, real-time pollutant traceability becomes
feasible. This method, applicable in various fields, especially those related
to water pollution, can identify the source of a contaminant in the
environment, thus serving as a valuable tool in environmental protection. | http://arxiv.org/abs/2308.05958v2 |
Driven-dissipative condensates, such as those formed from polaritons, expose
how the coherence of Bose-Einstein condensates evolves far from equilibrium. We
consider the phase and frequency ordering in the steady-states of a
one-dimensional lattice of condensates, described by a coupled oscillator model
with non-odd couplings, and include both time-dependent noise and a static
random potential. We present numerical results for the phase and frequency
distributions, and discuss them in terms of the Kardar-Parisi-Zhang equation
and the physics of spacetime vortices. We find that the nucleation of spacetime
vortices causes the breakdown of the single-frequency steady-state and produces
a variation in the frequency with position. Such variation would provide an
experimental signature of spacetime vortices. More generally, our results
expose the nature of synchronization in oscillator chains with non-odd
couplings, random frequencies, and noise. | http://arxiv.org/abs/2304.12129v2 |
The increasing complexity of modern deep neural network models and the
expanding sizes of datasets necessitate the development of optimized and
scalable training methods. In this white paper, we address the challenge of
efficiently training neural network models using sequences of varying sizes. To
address this challenge, we propose a novel training scheme that enables
efficient distributed data-parallel training on sequences of different sizes
with minimal overhead. Using this scheme, we were able to reduce the amount of
padding by more than $100\times$ without deleting a single frame, resulting in
overall improvements in both training time and Recall in our
experiments. | http://arxiv.org/abs/2310.10879v2 |
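The abstract above does not spell out the training scheme, but the padding-reduction idea can be illustrated with simple length bucketing: batches built from length-sorted sequences pad each batch only to its own longest member. The bucketing rule and toy lengths below are assumptions for illustration, not the paper's method:

```python
def bucket_batches(lengths, batch_size):
    """Sort sequences by length, then cut consecutive runs into batches:
    each batch is padded only to the length of its longest member, which
    keeps padding low without dropping any frame."""
    order = sorted(range(len(lengths)), key=lambda i: lengths[i])
    return [order[i:i + batch_size] for i in range(0, len(order), batch_size)]

def padding_cost(lengths, batches):
    """Total padded frames (padded length minus real length, summed)."""
    cost = 0
    for b in batches:
        longest = max(lengths[i] for i in b)
        cost += sum(longest - lengths[i] for i in b)
    return cost

lengths = [3, 50, 4, 48, 5, 47, 6, 49]     # mixed short/long sequences
naive = [[0, 1, 2, 3], [4, 5, 6, 7]]       # arbitrary grouping
bucketed = bucket_batches(lengths, 4)       # [[0, 2, 4, 6], [5, 3, 7, 1]]
# padding_cost(lengths, bucketed) < padding_cost(lengths, naive)
```

On these toy lengths the bucketed grouping pads 12 frames versus 184 for the arbitrary grouping, showing how grouping by size alone can cut padding by orders of magnitude.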
Let $A$ be a commutative Noetherian ring of characteristic zero and $R=A[X_1,
\ldots, X_d]$ be a polynomial ring over $A$ with the standard
$\mathbb{N}^d$-grading. Let $I\subseteq R$ be an ideal which can be generated
by elements of the form $aU$ where $a \in A$ (possibly nonunit) and $U$ is a
monomial in the $X_i$'s. We call such an ideal a `$\mathfrak{C}$-monomial
ideal'. Local cohomology modules supported on monomial ideals have attracted a
great deal of interest due to their applications in the context of toric varieties. It was
observed that for $\underline{u} \in \mathbb{Z}^d$, their $\underline{u}^{th}$
components depend only on which coordinates of $\underline{u}$ are negative. In
this article, we show that this statement holds true in our general setting,
even for certain invariants of the components. We mainly focus on the Bass
numbers, injective dimensions, dimensions, associated primes, Bernstein-type
dimensions, and multiplicities of the components. Under the extra assumption
that $A$ is regular, we describe the finiteness of Bass numbers of each
component and bound its injective dimension by the dimension of its support.
Finally, we present a structure theorem for the components when $A$ is the ring
of formal power series in one variable over a characteristic zero field. | http://arxiv.org/abs/2307.03574v1 |
Given a topological dynamical system $(X,T)$, we study properties of the mean
orbital pseudo-metric $\bar E$ defined by \[ \bar E(x,y)= \limsup_{n\to\infty }
\min_{\sigma\in S_n}\frac{1}{n}\sum_{k=0}^{n-1}d(T^k(x),T^{\sigma(k)}(y)), \]
where $x,y\in X$ and $S_n$ is the permutation group of $\{0,1,\ldots,n-1\}$.
Let $\hat\omega_T(x)$ denote the set of measures quasi-generated by a point
$x\in X$. We show that the map $x\mapsto\hat\omega_T(x)$ is uniformly
continuous if $X$ is endowed with the pseudo-metric $\bar E$ and the space of
compact subsets of the set of invariant measures is considered with the
Hausdorff distance. We also obtain a new characterisation of $\bar
E$-continuity, which connects it to other properties studied in the literature,
like continuous pointwise ergodicity introduced by Downarowicz and Weiss.
Finally, we apply our results to reprove some known results on $\bar
E$-continuous and mean equicontinuous systems. | http://arxiv.org/abs/2303.11487v1 |
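For small $n$, the inner minimum in the displayed definition of $\bar E$ can be evaluated by brute force over all $n!$ permutations. The sketch below does exactly that; the circle rotation and arc-length metric are illustrative choices, not from the paper:

```python
from itertools import permutations

def mean_orbital_dist(T, d, x, y, n):
    """Brute-force evaluation of the inner minimum in the definition of
    the mean orbital pseudo-metric: the cheapest average matching between
    the first n orbit points of x and of y (feasible only for small n,
    since it ranges over all n! permutations of {0, ..., n-1})."""
    ox, oy = [x], [y]                       # finite orbit segments
    for _ in range(n - 1):
        ox.append(T(ox[-1]))
        oy.append(T(oy[-1]))
    return min(sum(d(ox[k], oy[s[k]]) for k in range(n)) / n
               for s in permutations(range(n)))

# Toy check on the circle rotation T(t) = t + 0.3 (mod 1) with the
# arc-length metric; the optimal matching need not pair orbits in order.
T = lambda t: (t + 0.3) % 1.0
d = lambda s, t: min(abs(s - t), 1 - abs(s - t))
val = mean_orbital_dist(T, d, 0.0, 0.1, 6)
```

Note that the in-order matching here costs 0.1 per pair, while the optimal permutation achieves an average of 0.4/6, illustrating why the minimum over $S_n$ (rather than the identity pairing) is what makes $\bar E$ an *orbital* pseudo-metric.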
Despite significant progress having been made in question answering on
tabular data (Table QA), it is unclear whether, and to what extent, existing
Table QA models are robust to task-specific perturbations, e.g., replacing key
question entities or shuffling table columns. To systematically study the
robustness of Table QA models, we propose a benchmark called RobuT, which
builds upon existing Table QA datasets (WTQ, WikiSQL-Weak, and SQA) and
includes human-annotated adversarial perturbations in terms of table header,
table content, and question. Our results indicate that both state-of-the-art
Table QA models and large language models (e.g., GPT-3) with few-shot learning
falter in these adversarial sets. We propose to address this problem by using
large language models to generate adversarial examples to enhance training,
which significantly improves the robustness of Table QA models. Our data and
code are publicly available at https://github.com/yilunzhao/RobuT. | http://arxiv.org/abs/2306.14321v1 |
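One of the named perturbation types, shuffling table columns, is easy to sketch: a robust Table QA model should answer identically on the original and the shuffled table, since column order carries no meaning. The header/rows dictionary below is an assumed toy representation, not the benchmark's actual schema:

```python
import random

def shuffle_columns(table, seed=0):
    """Reorder a table's columns with a fixed random permutation,
    keeping each cell attached to its (moved) header."""
    rng = random.Random(seed)
    header, rows = table["header"], table["rows"]
    perm = list(range(len(header)))
    rng.shuffle(perm)
    return {"header": [header[i] for i in perm],
            "rows": [[row[i] for i in perm] for row in rows]}

table = {"header": ["city", "pop"],
         "rows": [["Oslo", 709037], ["Bergen", 291940]]}
shuffled = shuffle_columns(table)
# The header-to-value mapping of every row is unchanged, only the order.
```

Checking that a model's answers are invariant under such content-preserving transformations is the kind of robustness probe the benchmark automates at scale with human-annotated perturbations.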
This work is about uniform, plane, singly connected, strictly regular
Hall-plates with an arbitrary number of peripheral contacts exposed to a
uniform magnetic field of arbitrary strength. The strictly regular symmetry is
the highest possible degree of symmetry, and it is found in commercial
Hall-plates for magnetic field sensors or circulators. It means that all
contacts and contact spacings are equally large, if the Hall-plate is mapped
conformally to the unit disk. The indefinite conductance matrices of such
Hall-plates are circulant matrices, whose complex eigenvalues can be computed
in closed form. It is shown how to express the conductance and resistance
matrices of these Hall-plates, how to compute their equivalent resistor
circuit, their Hall-output voltages or currents, their signal-to-thermal noise
ratio, and their power as functions of the eigenvalues. It is also proven that
the noise efficiency of strictly regular Hall-plates with many contacts can be
up to 112% better than for conventional Hall-plates with four contacts, and it
is explained why their optimal biasing uses patterns of supply voltages or
currents, which vary sinusoidally along their boundary. | http://arxiv.org/abs/2304.01633v1 |
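The closed-form eigenvalues mentioned above are simply the discrete Fourier transform of the circulant matrix's first row. A minimal sketch follows, with an assumed toy first row whose zero row sum mimics an indefinite conductance matrix; it is not the paper's actual Hall-plate model:

```python
import cmath

def circulant_eigenvalues(first_row):
    """Closed-form eigenvalues of a circulant matrix with first row
    c_0, ..., c_{n-1}: lambda_k = sum_j c_j * exp(-2*pi*1j*j*k/n),
    i.e. the DFT of the first row.  The eigenvectors are the Fourier
    vectors, independent of the entries."""
    n = len(first_row)
    return [sum(c * cmath.exp(-2j * cmath.pi * j * k / n)
                for j, c in enumerate(first_row))
            for k in range(n)]

# Indefinite conductance matrices have zero row sums, so lambda_0 = 0:
# e.g. first row (2, -1, 0, -1) for a four-contact circulant matrix.
eig = circulant_eigenvalues([2.0, -1.0, 0.0, -1.0])
```

This is why the eigenvalues of such conductance matrices can be written in closed form: circulant structure diagonalizes in the Fourier basis regardless of the specific entries.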
In this technical report, we describe a Guided-Attention-based solution for
the short-term anticipation (STA) task in the EGO4D
challenge. It combines object detections with the spatiotemporal features
extracted from video clips, enhancing the motion and contextual information,
and further decoding the object-centric and motion-centric information to
address the problem of STA in egocentric videos. For the challenge, we build
our model on top of StillFast with Guided Attention applied to the fast network.
Our model obtains better performance on the validation set and also achieves
state-of-the-art (SOTA) results on the challenge test set for EGO4D Short-Term
Object Interaction Anticipation Challenge. | http://arxiv.org/abs/2305.16066v3 |
Context: Combining high-contrast imaging with medium- or high-resolution
integral field spectroscopy has the potential to boost the detection rate of
exoplanets, especially at small angular separations. Furthermore, it
immediately provides a spectrum of the planet that can be used to characterise
its atmosphere. The achievable spectral resolution, wavelength coverage, and
field of view (FOV) of such an instrument are limited by the number of available detector
pixels. Methods: The trade-offs are studied through end-to-end simulations of a
typical high-contrast imaging instrument, analytical considerations, and
atmospheric retrievals. The results are then validated with archival
VLT/SINFONI data of the planet beta Pictoris b. Results: We show that molecular
absorption spectra generally have decreasing power towards higher spectral
resolution and that molecule mapping is already powerful for moderate
resolutions (R>300). When choosing between wavelength coverage and spectral
resolution for a given number of spectral bins, it is best to first increase
the spectral resolution until R~2,000 and then maximise the bandwidth within an
observing band. We find that T-type companions are most easily detected in the
J/H band through methane and water features, while L-type companions are best
observed in the H/K band through water and CO features. Such an instrument does
not need to have a large FOV, as most of the gain in contrast is obtained in
the speckle-limited regime close to the star. We show that the same conclusions
are valid for the constraints on atmospheric parameters such as the C/O ratio,
metallicity, surface gravity, and temperature, while higher spectral resolution
(R~10,000) is required to constrain the radial velocity and spin of the planet. | http://arxiv.org/abs/2305.19355v1 |
Diffusion-based generative models have achieved remarkable success in various
domains. Such models train a shared network on denoising tasks that encompass
different noise levels simultaneously, representing a form of multi-task learning (MTL).
However, analyzing and improving diffusion models from an MTL perspective
remains under-explored. In particular, MTL can sometimes lead to the well-known
phenomenon of negative transfer, which results in the performance degradation
of certain tasks due to conflicts between tasks. In this paper, we first aim to
analyze diffusion training from an MTL standpoint, presenting two key
observations: (O1) the task affinity between denoising tasks diminishes as the
gap between noise levels widens, and (O2) negative transfer can arise even in
diffusion training. Building upon these observations, we aim to enhance
diffusion training by mitigating negative transfer. To achieve this, we propose
leveraging existing MTL methods, but the huge number of denoising tasks makes
calculating the necessary per-task losses or gradients computationally
expensive. To address this challenge, we propose clustering the
denoising tasks into small task clusters and applying MTL methods to them.
Specifically, based on (O2), we employ interval clustering to enforce temporal
proximity among denoising tasks within clusters. We show that interval
clustering can be solved using dynamic programming, utilizing signal-to-noise
ratio, timestep, and task affinity for clustering objectives. Through this, our
approach addresses the issue of negative transfer in diffusion models by
allowing for efficient computation of MTL methods. We validate the efficacy of
proposed clustering and its integration with MTL methods through various
experiments, demonstrating 1) improved generation quality and 2) faster
training convergence of diffusion models. | http://arxiv.org/abs/2306.00354v3 |
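Interval clustering of time-ordered values into contiguous segments can indeed be solved by dynamic programming, as the abstract states. The sketch below uses a within-segment sum-of-squares objective as a stand-in for the paper's SNR-, timestep-, and task-affinity-based objectives; the toy values are assumed:

```python
def interval_clustering(values, k):
    """Partition time-ordered values into k contiguous segments
    minimizing total within-segment squared deviation from the segment
    mean, via dynamic programming.  Returns the k+1 segment boundaries."""
    n = len(values)
    pre, pre2 = [0.0], [0.0]                 # prefix sums for O(1) costs
    for v in values:
        pre.append(pre[-1] + v)
        pre2.append(pre2[-1] + v * v)

    def cost(i, j):                          # SSE of values[i:j]
        s, s2, m = pre[j] - pre[i], pre2[j] - pre2[i], j - i
        return s2 - s * s / m

    INF = float("inf")
    # dp[c][j]: best cost covering values[:j] with c segments
    dp = [[INF] * (n + 1) for _ in range(k + 1)]
    cut = [[0] * (n + 1) for _ in range(k + 1)]
    dp[0][0] = 0.0
    for c in range(1, k + 1):
        for j in range(c, n + 1):
            for i in range(c - 1, j):
                cand = dp[c - 1][i] + cost(i, j)
                if cand < dp[c][j]:
                    dp[c][j], cut[c][j] = cand, i
    bounds, j = [n], n                       # recover segment boundaries
    for c in range(k, 0, -1):
        j = cut[c][j]
        bounds.append(j)
    return bounds[::-1]

# Low-noise-like and high-noise-like timesteps are split apart:
bounds = interval_clustering([0.1, 0.2, 0.15, 5.0, 5.1, 4.9], k=2)
# -> [0, 3, 6]
```

Grouping denoising timesteps this way is what lets per-cluster (rather than per-task) losses or gradients be fed to standard MTL methods at a tractable cost.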
This paper explores a relationship between invariants of certain group
actions and the time-reversibility of two-dimensional polynomial differential
systems exhibiting a $1:-1$ resonant singularity at the origin. We focus on the
connection of time-reversibility with the Sibirsky subvariety of the center
(integrability) variety, which encompasses systems possessing a local analytic
first integral near the origin. An algorithm for generating the Sibirsky ideal
for these systems is proposed and the algebraic properties of the ideal are
examined.
Furthermore, using a generalization of the concept of time-reversibility we
study $n$-dimensional systems with a $1:\zeta:\zeta^2:\dots:\zeta^{n-1}$
resonant singularity at the origin, where $n$ is prime and $\zeta$ is a
primitive $n$-th root of unity. We study the invariants of a Lie group action
on the parameter space of the system, leveraging the theory of binomial ideals
as a fundamental tool for the analysis. Our study reveals intriguing
connections between generalized reversibility, invariants, and binomial ideals,
shedding light on their complex interrelations. | http://arxiv.org/abs/2309.01817v3 |
We present a novel fully Bayesian analysis to constrain short gamma-ray burst
jet structures associated with cocoon, wide-angle and simple top-hat jet
models, as well as the binary neutron star merger rate. These constraints are
made given the distance and inclination information from GW170817, observed
flux of GRB170817A, observed rate of short gamma-ray bursts detected by Swift,
and the neutron star merger rate inferred from LIGO's first and second
observing runs. A separate analysis is conducted where a fitted short gamma-ray
burst luminosity function is included to provide further constraints. The jet
structure models are further constrained using the observation of GW190425 and
we find that the assumption that it produced a GRB170817-like short gamma-ray
burst that went undetected due to the jet geometry is consistent with previous
observations. We find and quantify evidence for low luminosity and wide-angled
jet structuring in the short gamma-ray burst population, independently from
afterglow observations, with log Bayes factors of $0.45{-}0.55$ for such models
when compared to a classical top-hat jet. Slight evidence is found for a
Gaussian jet structure model over all others when the fitted luminosity
function is provided, producing log Bayes factors of $0.25{-}0.9\pm0.05$ when
compared to the other models. However without considering GW190425 or the
fitted luminosity function, the evidence favours a cocoon-like model with log
Bayes factors of $0.14\pm0.05$ over the Gaussian jet structure. We provide new
constraints to the binary neutron star merger rates of
$1{-}1300$Gpc$^{-3}$yr$^{-1}$ or $2{-}680$Gpc$^{-3}$yr$^{-1}$ when a fitted
luminosity function is assumed. | http://arxiv.org/abs/2305.06275v2 |