| title (string) | abstract (string) |
|---|---|
A semantic framework for preference handling in answer set programming
|
We provide a semantic framework for preference handling in answer set
programming. To this end, we introduce preference preserving consequence
operators. The resulting fixpoint characterizations provide us with a uniform
semantic framework for characterizing preference handling in existing
approaches. Although our approach is extensible to other semantics by means of
an alternating fixpoint theory, we focus here on the elaboration of preferences
under answer set semantics. Alternatively, we show how these approaches can be
characterized by the concept of order preservation. These uniform semantic
characterizations provide us with new insights into the interrelationships
among the approaches and, moreover, into ways of implementing them.
|
Defeasible Logic Programming: An Argumentative Approach
|
The work reported here introduces Defeasible Logic Programming (DeLP), a
formalism that combines results of Logic Programming and Defeasible
Argumentation. DeLP provides the possibility of representing information in the
form of weak rules in a declarative manner, and a defeasible argumentation
inference mechanism for warranting the entailed conclusions.
In DeLP an argumentation formalism will be used for deciding between
contradictory goals. Queries will be supported by arguments that could be
defeated by other arguments. A query q will succeed when there is an argument A
for q that is warranted, i.e., when the argument A that supports q is found
undefeated by a warrant procedure that implements a dialectical analysis.
The defeasible argumentation basis of DeLP makes it possible to build
applications that deal with incomplete and contradictory information in dynamic
domains. Thus, the resulting approach is suitable for representing an agent's
knowledge and for providing an argumentation-based reasoning mechanism to
agents.
|
Constraint-based analysis of composite solvers
|
Cooperative constraint solving is an area of constraint programming that
studies the interaction between constraint solvers with the aim of discovering
the interaction patterns that amplify the positive qualities of individual
solvers. The automation and formalisation of such studies are important issues
in cooperative constraint solving.
In this paper we present a constraint-based analysis of composite solvers
that integrates reasoning about the individual solvers and the processed data.
The idea is to approximate this reasoning by resolution of set constraints on
the finite sets representing the predicates that express all the necessary
properties. We illustrate the application of our analysis to two important
cooperation patterns: deterministic choice and loop.
|
Kalman-filtering using local interactions
|
There is a growing interest in using Kalman-filter models for brain
modelling. In turn, it is of considerable importance to represent the Kalman
filter in connectionist form with local Hebbian learning rules. To the best of
our knowledge, the Kalman filter has not been given such a local
representation. It seems that the main obstacle is the dynamic adaptation of
the Kalman gain. Here, a connectionist representation is presented, which is
derived by means of the recursive prediction error method. We show that this
method gives rise to attractive local learning rules and can adapt the Kalman
gain.
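As a point of reference for the gain-adaptation issue discussed above, here is a minimal sketch of a standard linear Kalman filter step in Python; it is a generic textbook formulation, not the paper's connectionist, locally learned representation, and the 1-D example parameters are purely illustrative.

```python
import numpy as np

def kalman_step(x_est, P, z, A, H, Q, R):
    """One predict/update cycle of a standard linear Kalman filter.
    The quantity of interest is the gain K, recomputed at every step from
    the error covariance P; the paper's contribution (a local, Hebbian-style
    adaptation of this gain) is not reproduced in this generic sketch."""
    # Predict
    x_pred = A @ x_est
    P_pred = A @ P @ A.T + Q
    # Update: gain from the predicted covariance
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x_est)) - K @ H) @ P_pred
    return x_new, P_new, K

# Illustrative 1-D constant-state model with noisy observations.
x, P = np.array([0.0]), np.array([[1.0]])
A = H = np.array([[1.0]]); Q = np.array([[1e-4]]); R = np.array([[0.1]])
for z in (0.9, 1.1, 1.0, 0.95):
    x, P, K = kalman_step(x, P, np.array([z]), A, H, Q, R)
print(x, K)   # estimate near 1.0; the gain K shrinks as P shrinks
```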
|
On the Notion of Cognition
|
We discuss philosophical issues concerning the notion of cognition, basing
ourselves on experimental results in the cognitive sciences, especially in computer
simulations of cognitive systems. There have been debates on the "proper"
approach for studying cognition, but we have realized that all approaches can
be in theory equivalent. Different approaches model different properties of
cognitive systems from different perspectives, so we can only learn from all of
them. We also integrate ideas from several perspectives for enhancing the
notion of cognition, such that it can contain other definitions of cognition as
special cases. This allows us to propose a simple classification of different
types of cognition.
|
Unfolding Partiality and Disjunctions in Stable Model Semantics
|
The paper studies an implementation methodology for partial and disjunctive
stable models where partiality and disjunctions are unfolded from a logic
program so that an implementation of stable models for normal
(disjunction-free) programs can be used as the core inference engine. The
unfolding is done in two separate steps. Firstly, it is shown that partial
stable models can be captured by total stable models using a simple linear and
modular program transformation. Hence, reasoning tasks concerning partial
stable models can be solved using an implementation of total stable models.
Disjunctive partial stable models have so far lacked implementations, which now
become available because the translation also handles the disjunctive case.
Secondly, it is shown how total stable models of disjunctive programs can be
determined by computing stable models for normal programs. Hence, an
implementation of stable models of normal programs can be used as a core engine
for implementing disjunctive programs. The feasibility of the approach is
demonstrated by constructing a system for computing stable models of
disjunctive programs using the smodels system as the core engine. The
performance of the resulting system is compared to that of dlv, a
state-of-the-art special-purpose system for disjunctive programs.
|
Multi-target particle filtering for the probability hypothesis density
|
When tracking a large number of targets, it is often computationally
expensive to represent the full joint distribution over target states. In cases
where the targets move independently, each target can instead be tracked with a
separate filter. However, this leads to a model-data association problem.
Another approach to the computational complexity problem is to track
only the first moment of the joint distribution, the probability hypothesis
density (PHD). The integral of this distribution over any area S is the
expected number of targets within S. Since no record of object identity is
kept, the model-data association problem is avoided.
The contribution of this paper is a particle filter implementation of the PHD
filter mentioned above. This PHD particle filter is applied to tracking of
multiple vehicles in terrain, a non-linear tracking problem. Experiments show
that the filter can track a changing number of vehicles robustly, achieving
near-real-time performance.
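To make the construction above concrete, here is a minimal, one-dimensional sketch of a PHD particle-filter iteration. The survival, detection, clutter and birth parameters, the random-walk motion model and the surveillance region are illustrative assumptions, not values from the paper; the sum of the particle weights plays the role of the expected number of targets.

```python
import numpy as np

rng = np.random.default_rng(0)

def phd_step(particles, weights, measurements, p_survive=0.99, p_detect=0.9,
             clutter=1e-3, motion_std=0.5, meas_std=1.0,
             n_birth=50, birth_weight=0.01, region=(-20.0, 20.0)):
    """One predict/update/resample cycle of a 1-D PHD particle filter.
    No target identities are kept; the total weight approximates the
    expected number of targets in the surveillance region."""
    # Predict: survival and random-walk motion, plus uniformly born particles.
    particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
    weights = p_survive * weights
    particles = np.concatenate([particles, rng.uniform(*region, n_birth)])
    weights = np.concatenate([weights, np.full(n_birth, birth_weight)])
    # Update: standard PHD weight update over all measurements.
    new_w = (1.0 - p_detect) * weights
    for z in measurements:
        g = np.exp(-0.5 * ((z - particles) / meas_std) ** 2) \
            / (meas_std * np.sqrt(2.0 * np.pi))
        lik = p_detect * g * weights
        new_w += lik / (clutter + lik.sum())
    weights = new_w
    # Resample, preserving the total mass as the target-count estimate.
    expected_targets = weights.sum()
    n = len(particles)
    idx = rng.choice(n, size=n, p=weights / expected_targets)
    return particles[idx], np.full(n, expected_targets / n), expected_targets

# Illustrative run with two persistent detections.
parts, w = np.empty(0), np.empty(0)
for _ in range(10):
    parts, w, n_hat = phd_step(parts, w, measurements=[-5.0, 7.0])
print(round(n_hat, 2))   # settles near two targets for these parameters
```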
|
A Framework for Searching AND/OR Graphs with Cycles
|
Search in cyclic AND/OR graphs has traditionally been regarded as an unsolved
problem. In the recent past, several important studies have been reported in
this domain. In this paper, we take a fresh look at the problem. First, a new
and comprehensive theoretical framework for cyclic AND/OR graphs is presented,
one that has been missing from the recent literature. Based on this framework,
two best-first search algorithms, S1 and S2, are developed. S1 performs
uninformed search and is a simple modification of the Bottom-up algorithm of
Martelli and Montanari. S2 performs a heuristically guided search and
replicates the modification in Bottom-up's successors, namely HS and AO*. Both
S1 and S2 solve the problem of searching AND/OR graphs in the presence of
cycles.
We then present a detailed analysis for the correctness and complexity results
of S1 and S2, using the proposed framework. We have observed through
experiments that S1 and S2 output correct results in all cases.
|
On rho in a Decision-Theoretic Apparatus of Dempster-Shafer Theory
|
Thomas M. Strat has developed a decision-theoretic apparatus for
Dempster-Shafer theory (Decision analysis using belief functions, Intern. J.
Approx. Reason. 4(5/6), 391-417, 1990). In this apparatus, expected utility
intervals are constructed for different choices. The choice with the highest
expected utility is preferable to others. However, to find the preferred choice
when the expected utility interval of one choice is included in that of
another, it is necessary to interpolate a discerning point in the intervals.
This is done by the parameter rho, defined as the probability that the
ambiguity about the utility of every nonsingleton focal element will turn out
as favorable as possible. If there are several different decision makers, we
might sometimes be more interested in having the highest expected utility among
the decision makers rather than only trying to maximize our own expected
utility regardless of choices made by other decision makers. The preference of
each choice is then determined by the probability of yielding the highest
expected utility. This probability is equal to the maximal interval length of
rho under which an alternative is preferred. We must here take into account not
only the choices already made by other decision makers but also the rational
choices we can assume to be made by later decision makers. In Strat's apparatus,
an assumption, unwarranted by the evidence at hand, has to be made about the
value of rho. We demonstrate that no such assumption is necessary. It is
sufficient to assume a uniform probability distribution for rho to be able to
discern the most preferable choice. We discuss when this approach is
justifiable.
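A small sketch of the quantity discussed above, assuming the usual linear reading of Strat's rho: the discerning point of an expected utility interval [E_*, E^*] is E_* + rho (E^* - E_*), and under a uniform distribution for rho the probability that one choice yields the higher expected utility is the length of the rho-interval on which it is preferred. The two-choice example is illustrative.

```python
def preference_probability(interval_a, interval_b):
    """Probability, with rho ~ Uniform[0, 1] shared by both choices, that
    choice A has the higher interpolated expected utility E_* + rho*(E^* - E_*)
    than choice B.  Ties at a single rho have probability zero."""
    (a_lo, a_hi), (b_lo, b_hi) = interval_a, interval_b
    d0 = a_lo - b_lo                         # utility difference A - B at rho = 0
    d1 = a_hi - b_hi                         # utility difference A - B at rho = 1
    if d0 == d1:                             # constant difference over all rho
        return 1.0 if d0 > 0 else 0.0
    r = min(max(d0 / (d0 - d1), 0.0), 1.0)   # zero crossing, clamped to [0, 1]
    # The difference d0 + rho*(d1 - d0) is positive on [0, r) if d0 > 0
    # and on (r, 1] if d1 > 0; summing the two pieces covers every case.
    return (r if d0 > 0 else 0.0) + (1.0 - r if d1 > 0 else 0.0)

# Example: B's interval [0.4, 0.6] is included in A's interval [0.2, 0.9];
# A yields the higher expected utility with probability 0.6.
print(preference_probability((0.2, 0.9), (0.4, 0.6)))
```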
|
Updating beliefs with incomplete observations
|
Currently, there is renewed interest in the problem, raised by Shafer in
1985, of updating probabilities when observations are incomplete. This is a
fundamental problem in general, and of particular interest for Bayesian
networks. Recently, Grunwald and Halpern have shown that commonly used updating
strategies fail in this case, except under very special assumptions. In this
paper we propose a new method for updating probabilities with incomplete
observations. Our approach is deliberately conservative: we make no assumptions
about the so-called incompleteness mechanism that associates complete with
incomplete observations. We model our ignorance about this mechanism by a
vacuous lower prevision, a tool from the theory of imprecise probabilities, and
we use only coherence arguments to turn prior into posterior probabilities. In
general, this new approach to updating produces lower and upper posterior
probabilities and expectations, as well as partially determinate decisions.
This is a logical consequence of the existing ignorance about the
incompleteness mechanism. We apply the new approach to the problem of
classification of new evidence in probabilistic expert systems, where it leads
to a new, so-called conservative updating rule. In the special case of Bayesian
networks constructed using expert knowledge, we provide an exact algorithm for
classification based on our updating rule, which has linear-time complexity for
a class of networks wider than polytrees. This result is then extended to the
more general framework of credal networks, where computations are often much
harder than with Bayesian nets. Using an example, we show that our rule appears
to provide a solid basis for reliable updating with incomplete observations,
when no strong assumptions about the incompleteness mechanism are justified.
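A toy sketch in the spirit of this conservative attitude, under the simplifying assumption that the incompleteness affects an observed attribute but not the class of interest: the posterior probability of the class is bounded by ranging over the attribute values compatible with the incomplete report, which corresponds to making no assumption about which compatible value the mechanism hid. The joint distribution and labels are purely illustrative and not taken from the paper.

```python
def conservative_posterior(joint, compatible_attrs, cls):
    """Lower/upper posterior probability of class `cls` given an incomplete
    attribute observation: range over the attribute values compatible with
    the observation and report the extreme conditional probabilities.
    `joint[(c, a)]` is the joint probability of class c and attribute a."""
    posteriors = []
    for a in compatible_attrs:
        p_a = sum(p for (c, aa), p in joint.items() if aa == a)
        posteriors.append(joint.get((cls, a), 0.0) / p_a)
    return min(posteriors), max(posteriors)

# Illustrative joint over class {healthy, ill} and attribute {t1, t2}.
joint = {('healthy', 't1'): 0.4, ('ill', 't1'): 0.1,
         ('healthy', 't2'): 0.2, ('ill', 't2'): 0.3}
# The attribute reading is missing, so both t1 and t2 are compatible.
print(conservative_posterior(joint, ['t1', 't2'], 'ill'))   # (0.2, 0.6)
```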
|
Updating Probabilities
|
As examples such as the Monty Hall puzzle show, applying conditioning to
update a probability distribution on a ``naive space'', which does not take
into account the protocol used, can often lead to counterintuitive results.
Here we examine why. A criterion known as CAR (``coarsening at random'') in the
statistical literature characterizes when ``naive'' conditioning in a naive
space works. We show that the CAR condition holds rather infrequently, and we
provide a procedural characterization of it, by giving a randomized algorithm
that generates all and only distributions for which CAR holds. This
substantially extends previous characterizations of CAR. We also consider more
generalized notions of update such as Jeffrey conditioning and minimizing
relative entropy (MRE). We give a generalization of the CAR condition that
characterizes when Jeffrey conditioning leads to appropriate answers, and show
that there exist some very simple settings in which MRE essentially never gives
the right results. This generalizes and interconnects previous results obtained
in the literature on CAR and MRE.
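A small numerical sketch of the point above, using the Monty Hall puzzle: conditioning in the naive space (the three possible prize locations) differs from conditioning in a refined space that models the host's protocol. The protocol encoded here, the host opens a non-chosen door without the prize and chooses at random on ties, is the standard one and is stated as an assumption.

```python
from fractions import Fraction

# Refined space: (prize door, door opened by the host), player has picked door 1.
space = {}
for prize in (1, 2, 3):
    openable = [d for d in (2, 3) if d != prize]       # host's protocol
    for opened in openable:
        space[(prize, opened)] = Fraction(1, 3) / len(openable)

# Observation: the host opened door 3.
obs = {k: p for k, p in space.items() if k[1] == 3}
p_switch_wins = sum(p for (prize, _), p in obs.items() if prize == 2) / sum(obs.values())
print(p_switch_wins)                                   # 2/3 in the refined space

# Naive conditioning on "the prize is not behind door 3" in the naive space.
naive = {d: Fraction(1, 3) for d in (1, 2)}
print(naive[2] / sum(naive.values()))                  # 1/2, the counterintuitive answer
```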
|
Pruning Isomorphic Structural Sub-problems in Configuration
|
Configuring consists in simulating the realization of a complex product from
a catalog of component parts, using known relations between types, and picking
values for object attributes. This highly combinatorial problem in the field of
constraint programming has been addressed with a variety of approaches since
the founding system R1 (McDermott, 1982). An inherent difficulty in solving
configuration problems is the existence of many isomorphisms among
interpretations. We describe a formalism-independent approach to improve the
detection of isomorphisms by configurators, one that does not require adapting
the problem model. To achieve this, we exploit the properties of a
characteristic subset of configuration problems, called the structural
sub-problem, whose canonical solutions can be produced or tested at limited
cost. In this paper we present an algorithm for testing the canonicity of
configurations, which can be added as a symmetry-breaking constraint to any
configurator. The cost and efficiency of this canonicity test are given.
|
Probabilistic Reasoning as Information Compression by Multiple
Alignment, Unification and Search: An Introduction and Overview
|
This article introduces the idea that probabilistic reasoning (PR) may be
understood as "information compression by multiple alignment, unification and
search" (ICMAUS). In this context, multiple alignment has a meaning which is
similar to but distinct from its meaning in bio-informatics, while unification
means a simple merging of matching patterns, a meaning which is related to but
simpler than the meaning of that term in logic.
A software model, SP61, has been developed for the discovery and formation of
'good' multiple alignments, evaluated in terms of information compression. The
model is described in outline.
Using examples from the SP61 model, this article describes in outline how the
ICMAUS framework can model various kinds of PR including: PR in best-match
pattern recognition and information retrieval; one-step 'deductive' and
'abductive' PR; inheritance of attributes in a class hierarchy; chains of
reasoning (probabilistic decision networks and decision trees, and PR with
'rules'); geometric analogy problems; nonmonotonic reasoning and reasoning with
default values; modelling the function of a Bayesian network.
|
Information Compression by Multiple Alignment, Unification and Search as
a Unifying Principle in Computing and Cognition
|
This article presents an overview of the idea that "information compression
by multiple alignment, unification and search" (ICMAUS) may serve as a unifying
principle in computing (including mathematics and logic) and in such aspects of
human cognition as the analysis and production of natural language, fuzzy
pattern recognition and best-match information retrieval, concept hierarchies
with inheritance of attributes, probabilistic reasoning, and unsupervised
inductive learning. The ICMAUS concepts are described together with an outline
of the SP61 software model in which the ICMAUS concepts are currently realised.
A range of examples is presented, illustrated with output from the SP61 model.
|
Integrating cardinal direction relations and other orientation relations
in Qualitative Spatial Reasoning
|
We propose a calculus integrating two calculi well-known in Qualitative
Spatial Reasoning (QSR): Frank's projection-based cardinal direction calculus,
and a coarser version of Freksa's relative orientation calculus. An original
constraint propagation procedure is presented, which implements the interaction
between the two integrated calculi. The importance of taking into account the
interaction is shown with a real example providing an inconsistent knowledge
base, whose inconsistency (a) cannot be detected by reasoning separately about
each of the two components of the knowledge, just because, taken separately,
each is consistent, but (b) is detected by the proposed algorithm, thanks to
the interaction knowledge propagated from each of the two components to the
other.
|
A ternary Relation Algebra of directed lines
|
We define a ternary Relation Algebra (RA) of relative position relations on
two-dimensional directed lines (d-lines for short). A d-line has two degrees of
freedom (DFs): a rotational DF (RDF), and a translational DF (TDF). The
representation of the RDF of a d-line will be handled by an RA of 2D
orientations, CYC_t, known in the literature. A second algebra, TA_t, which
will handle the TDF of a d-line, will be defined. The two algebras, CYC_t and
TA_t, will constitute, respectively, the rotational and the translational
components of the RA, PA_t, of relative position relations on d-lines: the PA_t
atoms will consist of those pairs <t,r> of a TA_t atom and a CYC_t atom that
are compatible. We present in detail the RA PA_t, with its converse table, its
rotation table and its composition tables. We show that a (polynomial)
constraint propagation algorithm, known in the literature, is complete for a
subset of PA_t relations including almost all of the atomic relations. We will
discuss the application scope of the RA, which includes incidence geometry, GIS
(Geographic Information Systems), shape representation, localisation in
(multi-)robot navigation, and the representation of motion prepositions in NLP
(Natural Language Processing). We then compare the RA to existing ones, such as
an algebra for reasoning about rectangles parallel to the axes of an
(orthogonal) coordinate system, a ``spatial Odyssey'' of Allen's interval
algebra, and an algebra for reasoning about 2D segments.
|
From Statistical Knowledge Bases to Degrees of Belief
|
An intelligent agent will often be uncertain about various properties of its
environment, and when acting in that environment it will frequently need to
quantify its uncertainty. For example, if the agent wishes to employ the
expected-utility paradigm of decision theory to guide its actions, it will need
to assign degrees of belief (subjective probabilities) to various assertions.
Of course, these degrees of belief should not be arbitrary, but rather should
be based on the information available to the agent. This paper describes one
approach for inducing degrees of belief from very rich knowledge bases, that
can include information about particular individuals, statistical correlations,
physical laws, and default rules. We call our approach the random-worlds
method. The method is based on the principle of indifference: it treats all of
the worlds the agent considers possible as being equally likely. It is able to
integrate qualitative default reasoning with quantitative probabilistic
reasoning by providing a language in which both types of information can be
easily expressed. Our results show that a number of desiderata that arise in
direct inference (reasoning from statistical information to conclusions about
individuals) and default reasoning follow directly from the semantics of
random worlds. For example, random worlds captures important patterns of
reasoning such as specificity, inheritance, indifference to irrelevant
information, and default assumptions of independence. Furthermore, the
expressive power of the language used and the intuitive semantics of random
worlds allow the method to deal with problems that are beyond the scope of many
other non-deductive reasoning systems.
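A toy illustration of the principle of indifference underlying the random-worlds method: enumerate the worlds consistent with a knowledge base, treat them as equally likely, and read off a degree of belief as a fraction of worlds. The four-individual domain and the knowledge base below are invented for illustration and are far simpler than the rich statistical knowledge bases the paper addresses.

```python
from itertools import product

DOMAIN = range(4)   # four individuals; individual 0 plays the role of "Tweety"

def consistent(world):
    """world[i] = (is_bird, flies).  Illustrative knowledge base: exactly three
    of the four individuals are birds, at least two birds fly, and Tweety
    (individual 0) is a bird."""
    birds = [i for i in DOMAIN if world[i][0]]
    flying_birds = [i for i in birds if world[i][1]]
    return len(birds) == 3 and 0 in birds and len(flying_birds) >= 2

worlds = [w for w in product(product((False, True), repeat=2), repeat=len(DOMAIN))
          if consistent(w)]

# Degree of belief in "Tweety flies": fraction of the equally likely worlds
# consistent with the knowledge base in which it holds.
belief = sum(1 for w in worlds if w[0][1]) / len(worlds)
print(len(worlds), belief)
```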
|
An Alternative to RDF-Based Languages for the Representation and
Processing of Ontologies in the Semantic Web
|
This paper describes an approach to the representation and processing of
ontologies in the Semantic Web, based on the ICMAUS theory of computation and
AI. This approach has strengths that complement those of languages based on the
Resource Description Framework (RDF) such as RDF Schema and DAML+OIL. The main
benefits of the ICMAUS approach are simplicity and comprehensibility in the
representation of ontologies, an ability to cope with errors and uncertainties
in knowledge, and a versatile reasoning system with capabilities in the kinds
of probabilistic reasoning that seem to be required in the Semantic Web.
|
Quantifying and Visualizing Attribute Interactions
|
Interactions are patterns between several attributes in data that cannot be
inferred from any subset of these attributes. While mutual information is a
well-established approach to evaluating the interactions between two
attributes, we survey its generalizations that quantify interactions among
several attributes. We have chosen McGill's interaction information, which has
been independently rediscovered a number of times under various names in
various disciplines, because of its many intuitively appealing properties. We
apply interaction information to visually present the most important
interactions in the data. Visualization of interactions has provided insight
into the structure of data in a number of domains, identifying redundant
attributes and opportunities for constructing new features, discovering
unexpected regularities in data, and has helped during the construction of
predictive models; we illustrate the methods on numerous examples. A machine
learning method that disregards interactions may get caught in two traps:
myopia is caused by learning algorithms assuming independence in spite of
interactions, whereas fragmentation arises from assuming an interaction in
spite of independence.
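Written out for three attributes, with one common sign convention for McGill's interaction information, I(A;B;C) = I(A;B|C) - I(A;B): a positive value signals a synergy that no pair of attributes reveals. A minimal sketch, with an XOR-style joint distribution as an illustrative example of a purely three-way interaction:

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_info(joint2):
    """I(X;Y) in bits from a 2-D joint probability table."""
    return entropy(joint2.sum(1)) + entropy(joint2.sum(0)) - entropy(joint2.ravel())

def interaction_info(joint3):
    """McGill's interaction information I(A;B;C) = I(A;B|C) - I(A;B),
    computed from a 3-D joint probability array indexed [a, b, c]."""
    pc = joint3.sum(axis=(0, 1))
    i_ab_given_c = sum(pc[c] * mutual_info(joint3[:, :, c] / pc[c])
                       for c in range(joint3.shape[2]) if pc[c] > 0)
    return i_ab_given_c - mutual_info(joint3.sum(axis=2))

# XOR example: A and B uniform and independent, C = A xor B.
joint = np.zeros((2, 2, 2))
for a in (0, 1):
    for b in (0, 1):
        joint[a, b, a ^ b] = 0.25
print(interaction_info(joint))   # 1.0 bit: a purely three-way interaction
```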
|
Evidential Force Aggregation
|
In this paper we develop an evidential force aggregation method intended for
classification of evidential intelligence into recognized force structures. We
assume that the intelligence has already been partitioned into clusters and use
the classification method individually in each cluster. The classification is
based on a measure of fitness between template and fused intelligence that
makes it possible to handle intelligence reports with multiple nonspecific and
uncertain propositions. With this measure we can aggregate on a level-by-level
basis, starting from general intelligence to achieve a complete force structure
with recognized units on all hierarchical levels.
|
Application of Kullback-Leibler Metric to Speech Recognition
|
This article discusses the application of the Kullback-Leibler divergence to
the recognition of speech signals and suggests three algorithms implementing
this divergence criterion: a correlation algorithm, a spectral algorithm and a
filter algorithm. The discussion covers an approach to the problem of speech
variability
and is illustrated with the results of experimental modeling of speech signals.
The article gives a number of recommendations on the choice of appropriate
model parameters and provides a comparison to some other methods of speech
recognition.
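The abstract does not give the algorithms' details; as background, here is a minimal sketch of the Kullback-Leibler criterion itself, applied to normalized short-time magnitude spectra treated as discrete distributions. The synthetic frames and FFT size are illustrative choices only.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """D(p || q) for two discrete distributions, e.g. normalized spectra."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def spectrum(signal, n_fft=256):
    """Normalized magnitude spectrum used as a discrete distribution."""
    s = np.abs(np.fft.rfft(signal, n_fft))
    return s / s.sum()

# Illustrative: two synthetic frames at different pitches, sampled at 8 kHz.
t = np.arange(256) / 8000.0
frame_a = np.sin(2 * np.pi * 220 * t)
frame_b = np.sin(2 * np.pi * 330 * t)
print(kl_divergence(spectrum(frame_a), spectrum(frame_b)))
```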
|
The Algebra of Utility Inference
|
Richard Cox [1] set the axiomatic foundations of probable inference and the
algebra of propositions. He showed that consistency within these axioms
requires certain rules for updating belief. In this paper we use the analogy
between probability and utility introduced in [2] to propose an axiomatic
foundation for utility inference and the algebra of preferences. We show that
consistency within these axioms requires certain rules for updating preference.
We discuss a class of utility functions that stems from the axioms of utility
inference and show that this class is the basic building block for any general
multiattribute utility function. We use this class of utility functions
together with the algebra of preferences to construct utility functions
represented by logical operations on the attributes.
|
An information theory for preferences
|
Recent literature from the last Maximum Entropy workshop introduced an analogy
between cumulative probability distributions and normalized utility functions.
Based on this analogy, a utility density function can be defined as the
derivative of a normalized utility function. A utility density function is
non-negative and integrates to unity. These two properties form the basis of a
correspondence between utility and probability. A natural application of this
analogy is a maximum entropy principle to assign maximum entropy utility
values. Maximum entropy utility interprets many of the common utility functions
based on the preference information needed for their assignment, and helps
assign utility values based on partial preference information. This paper
reviews maximum entropy utility and introduces further results that stem from
the duality between probability and utility.
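Written out, the two properties stated above and the resulting assignment principle; this is a sketch of the construction, with the constraint set C standing for whatever partial preference information is available.

```latex
% Utility density as the derivative of a normalized utility function U on [a, b],
% with U(a) = 0, U(b) = 1 and U non-decreasing:
\[
  \varphi(x) = \frac{dU(x)}{dx}, \qquad
  \varphi(x) \ge 0, \qquad
  \int_a^b \varphi(x)\,dx = U(b) - U(a) = 1 .
\]
% Maximum entropy utility: choose the utility density that maximizes entropy
% subject to the available partial preference information (constraint set C):
\[
  \max_{\varphi}\; -\int_a^b \varphi(x) \ln \varphi(x)\,dx
  \quad \text{subject to} \quad \varphi \in C .
\]
```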
|
Abductive Logic Programs with Penalization: Semantics, Complexity and
Implementation
|
Abduction, first proposed in the setting of classical logics, has been
studied with growing interest in the logic programming area in recent years.
In this paper we study abduction with penalization in the logic programming
framework. This form of abductive reasoning, which has not been previously
analyzed in logic programming, turns out to represent several relevant
problems, including optimization problems, very naturally. We define a formal
model for abduction with penalization over logic programs, which extends the
abductive framework proposed by Kakas and Mancarella. We address knowledge
representation issues, encoding a number of problems in our abductive
framework. In particular, we consider some relevant problems, taken from
different domains, ranging from optimization theory to diagnosis and planning;
their encodings turn out to be simple and elegant in our formalism. We
thoroughly analyze the computational complexity of the main problems arising in
the context of abduction with penalization from logic programs. Finally, we
implement a system supporting the proposed abductive framework on top of the
DLV engine. To this end, we design a translation from abduction problems with
penalties into logic programs with weak constraints. We prove that this
approach is sound and complete.
|
Local-search techniques for propositional logic extended with
cardinality constraints
|
We study local-search satisfiability solvers for propositional logic extended
with cardinality atoms, that is, expressions that provide explicit ways to
model constraints on cardinalities of sets. Adding cardinality atoms to the
language of propositional logic facilitates modeling search problems and often
results in concise encodings. We propose two ``native'' local-search solvers
for theories in the extended language. We also describe techniques that reduce
the problem to standard propositional satisfiability and allow us to use
off-the-shelf SAT solvers. We study these methods experimentally. Our general
finding is that native solvers designed specifically for the extended language
perform better than indirect methods relying on SAT solvers.
|
WSAT(cc) - a fast local-search ASP solver
|
We describe WSAT(cc), a local-search solver for computing models of theories
in the language of propositional logic extended by cardinality atoms. WSAT(cc)
is a processing back-end for the logic PS+, a recently proposed formalism for
answer-set programming.
|
Utility-Probability Duality
|
This paper presents a duality between probability distributions and utility
functions.
|
Parametric Connectives in Disjunctive Logic Programming
|
Disjunctive Logic Programming (\DLP) is an advanced formalism for Knowledge
Representation and Reasoning (KRR). \DLP is very expressive in a precise
mathematical sense: it allows to express every property of finite structures
that is decidable in the complexity class $\SigmaP{2}$ ($\NP^{\NP}$).
Importantly, the \DLP encodings are often simple and natural.
In this paper, we single out some limitations of \DLP for KRR, which cannot
naturally express problems where the size of the disjunction is not known ``a
priori'' (like N-Coloring), but it is part of the input. To overcome these
limitations, we further enhance the knowledge modelling abilities of \DLP, by
extending this language by {\em Parametric Connectives (OR and AND)}. These
connectives allow us to represent compactly the disjunction/conjunction of a
set of atoms having a given property. We formally define the semantics of the
new language, named $DLP^{\bigvee,\bigwedge}$ and we show the usefulness of the
new constructs on relevant knowledge-based problems. We address implementation
issues and discuss related works.
|
Logic-Based Specification Languages for Intelligent Software Agents
|
The research field of Agent-Oriented Software Engineering (AOSE) aims to find
abstractions, languages, methodologies and toolkits for modeling, verifying,
validating and prototyping complex applications conceptualized as Multiagent
Systems (MASs). A very lively research sub-field studies how formal methods can
be used for AOSE. This paper presents a detailed survey of six logic-based
executable agent specification languages that have been chosen for their
potential to be integrated in our ARPEGGIO project, an open framework for
specifying and prototyping a MAS. The six languages are ConGoLog, Agent-0, the
IMPACT agent programming language, DyLog, Concurrent METATEM and Ehhf. For each
executable language, the logic foundations are described and an example of use
is shown. A comparison of the six languages and a survey of similar approaches
complete the paper, together with considerations of the advantages of using
logic-based languages in MAS modeling and prototyping.
|
Great Expectations. Part I: On the Customizability of Generalized
Expected Utility
|
We propose a generalization of expected utility that we call generalized EU
(GEU), where a decision maker's beliefs are represented by plausibility
measures, and the decision maker's tastes are represented by general (i.e., not
necessarily real-valued) utility functions. We show that every agent,
``rational'' or not, can be modeled as a GEU maximizer. We then show that we
can customize GEU by selectively imposing just the constraints we want. In
particular, we show how each of Savage's postulates corresponds to constraints
on GEU.
|
Great Expectations. Part II: Generalized Expected Utility as a Universal
Decision Rule
|
Many different rules for decision making have been introduced in the
literature. We show that a notion of generalized expected utility proposed in
Part I of this paper is a universal decision rule, in the sense that it can
represent essentially all other decision rules.
|
Unsupervised Grammar Induction in a Framework of Information Compression
by Multiple Alignment, Unification and Search
|
This paper describes a novel approach to grammar induction that has been
developed within a framework designed to integrate learning with other aspects
of computing, AI, mathematics and logic. This framework, called "information
compression by multiple alignment, unification and search" (ICMAUS), is founded
on principles of Minimum Length Encoding pioneered by Solomonoff and others.
Most of the paper describes SP70, a computer model of the ICMAUS framework that
incorporates processes for unsupervised learning of grammars. An example is
presented to show how the model can infer a plausible grammar from appropriate
input. Limitations of the current model and how they may be overcome are
briefly discussed.
|
Integrating existing cone-shaped and projection-based cardinal direction
relations and a TCSP-like decidable generalisation
|
We consider the integration of existing cone-shaped and projection-based
calculi of cardinal direction relations, well-known in QSR. The more general,
integrating language we consider is based on convex constraints of the
qualitative form $r(x,y)$, $r$ being a cone-shaped or projection-based cardinal
direction atomic relation, or of the quantitative form $(\alpha ,\beta)(x,y)$,
with $\alpha ,\beta\in [0,2\pi)$ and $(\beta -\alpha)\in [0,\pi ]$: the meaning
of the quantitative constraint, in particular, is that point $x$ belongs to the
(convex) cone-shaped area rooted at $y$, and bounded by angles $\alpha$ and
$\beta$. The general form of a constraint is a disjunction of the form
$[r_1\vee...\vee r_{n_1}\vee (\alpha_1,\beta_1)\vee...\vee (\alpha
_{n_2},\beta_{n_2})](x,y)$, with $r_i(x,y)$, $i=1... n_1$, and $(\alpha
_i,\beta_i)(x,y)$, $i=1... n_2$, being convex constraints as described above:
the meaning of such a general constraint is that, for some $i=1... n_1$,
$r_i(x,y)$ holds, or, for some $i=1... n_2$, $(\alpha_i,\beta_i)(x,y)$ holds. A
conjunction of such general constraints is a $\tcsp$-like CSP, which we will
refer to as an $\scsp$ (Spatial Constraint Satisfaction Problem). An effective
solution search algorithm for an $\scsp$ will be described, which uses (1)
constraint propagation, based on a composition operation to be defined, as the
filtering method during the search, and (2) the Simplex algorithm, guaranteeing
completeness, at the leaves of the search tree. The approach is particularly
suited for large-scale high-level vision, such as, e.g., satellite-like
surveillance of a geographic area.
|
Modeling Object Oriented Constraint Programs in Z
|
Object oriented constraint programs (OOCPs) emerge as a leading evolution of
constraint programming and artificial intelligence, first applied to a range of
industrial applications called configuration problems. The rich variety of
technical approaches to solving configuration problems (CLP(FD), CC(FD), DCSP,
Terminological systems, constraint programs with set variables ...) is a source
of difficulty. No universally accepted formal language exists for communicating
about OOCPs, which makes the comparison of systems difficult. We present here a
Z-based specification of OOCPs, which avoids the pitfall of hidden object
semantics. The object system is part of the specification, and captures all of
the most advanced notions from the object oriented modeling standard UML. The
paper illustrates these issues, and the conciseness and precision of Z, through
the specification of a working OOCP that solves a historical AI problem: parsing
a context-free grammar. Being written in Z, an OOCP specification also supports
formal proofs. The whole builds the foundation of an adaptive and evolving
framework for communicating about constrained object models and programs.
|
Diagnostic reasoning with A-Prolog
|
In this paper we suggest an architecture for a software agent which operates
a physical device and is capable of making observations and of testing and
repairing the device's components. We present simplified definitions of the
notions of symptom, candidate diagnosis, and diagnosis which are based on the
theory of action language ${\cal AL}$. The definitions allow one to give a
simple account of the agent's behavior in which many of the agent's tasks are
reduced to computing stable models of logic programs.
|
Weight Constraints as Nested Expressions
|
We compare two recent extensions of the answer set (stable model) semantics
of logic programs. One of them, due to Lifschitz, Tang and Turner, allows the
bodies and heads of rules to contain nested expressions. The other, due to
Niemela and Simons, uses weight constraints. We show that there is a simple,
modular translation from the language of weight constraints into the language
of nested expressions that preserves the program's answer sets. Nested
expressions can be eliminated from the result of this translation in favor of
additional atoms. The translation makes it possible to compute answer sets for
some programs with weight constraints using satisfiability solvers, and to
prove the strong equivalence of programs with weight constraints using the
logic of here-and-there.
|
On the Expressibility of Stable Logic Programming
|
Schlipf \cite{sch91} proved that Stable Logic
Programming (SLP) solves all $\mathit{NP}$ decision problems. We extend
Schlipf's result to prove that SLP solves all search problems in the class
$\mathit{NP}$. Moreover, we do this in a uniform way as defined in \cite{mt99}.
Specifically, we show that there is a single $\mathrm{DATALOG}^{\neg}$ program
$P_{\mathit{Trg}}$ such that given any Turing machine $M$, any polynomial $p$
with non-negative integer coefficients and any input $\sigma$ of size $n$ over
a fixed alphabet $\Sigma$, there is an extensional database
$\mathit{edb}_{M,p,\sigma}$ such that there is a one-to-one correspondence
between the stable models of $\mathit{edb}_{M,p,\sigma} \cup P_{\mathit{Trg}}$
and the accepting computations of the machine $M$ that reach the final state in
at most $p(n)$ steps. Moreover, $\mathit{edb}_{M,p,\sigma}$ can be computed in
polynomial time from $p$, $\sigma$ and the description of $M$ and the decoding
of such accepting computations from its corresponding stable model of
$\mathit{edb}_{M,p,\sigma} \cup P_{\mathit{Trg}}$ can be computed in linear
time. A similar statement holds for Default Logic with respect to
$\Sigma_2^\mathrm{P}$-search problems\footnote{The proof of this result
involves additional technical complications and will be a subject of another
publication.}.
|
Unifying Computing and Cognition: The SP Theory and its Applications
|
This book develops the conjecture that all kinds of information processing in
computers and in brains may usefully be understood as "information compression
by multiple alignment, unification and search". This "SP theory", which has
been under development since 1987, provides a unified view of such things as
the workings of a universal Turing machine, the nature of 'knowledge', the
interpretation and production of natural language, pattern recognition and
best-match information retrieval, several kinds of probabilistic reasoning,
planning and problem solving, unsupervised learning, and a range of concepts in
mathematics and logic. The theory also provides a basis for the design of an
'SP' computer with several potential advantages compared with traditional
digital computers.
|
Recycling Computed Answers in Rewrite Systems for Abduction
|
In rule-based systems, goal-oriented computations correspond naturally to the
possible ways that an observation may be explained. In some applications, we
need to compute explanations for a series of observations with the same domain.
The question arises whether previously computed answers can be recycled. A yes
answer could result in substantial savings by avoiding repeated computations.
For systems based on classical logic, the answer is YES. For nonmonotonic
systems, however, one tends to believe that the answer should be NO, since
recycling is a form of adding information. In this paper, we show that computed
answers can
always be recycled, in a nontrivial way, for the class of rewrite procedures
that we proposed earlier for logic programs with negation. We present some
experimental results on an encoding of the logistics domain.
|
Memory As A Monadic Control Construct In Problem-Solving
|
Recent advances in the study and design of programming languages have
established a standard way of grounding the representation of computational
systems in category theory. These formal results have led to a better
understanding of issues of control
and side-effects in functional and imperative languages. This framework can be
successfully applied to the investigation of the performance of Artificial
Intelligence (AI) inference and cognitive systems. In this paper, we delineate
a categorical formalisation of memory as a control structure driving
performance in inference systems. Abstracting away control mechanisms from
three widely used representations of memory in cognitive systems (scripts,
production rules and clusters) we explain how categorical triples capture the
interaction between learning and problem-solving.
|
Integrating Defeasible Argumentation and Machine Learning Techniques
|
The field of machine learning (ML) is concerned with the question of how to
construct algorithms that automatically improve with experience. In recent
years many successful ML applications have been developed, such as datamining
programs, information-filtering systems, etc. Although ML algorithms allow the
detection and extraction of interesting patterns of data for several kinds of
problems, most of these algorithms are based on quantitative reasoning, as they
rely on training data in order to infer so-called target functions.
In recent years defeasible argumentation has proven to be a sound setting
to formalize common-sense qualitative reasoning. This approach can be combined
with other inference techniques, such as those provided by machine learning
theory.
In this paper we outline different alternatives for combining defeasible
argumentation and machine learning techniques. We suggest how different aspects
of a generic argument-based framework can be integrated with other ML-based
approaches.
|
Epistemic Foundation of Stable Model Semantics
|
Stable model semantics has become a very popular approach for the management
of negation in logic programming. This approach relies mainly on the closed
world assumption to complete the available knowledge and its formulation has
its basis in the so-called Gelfond-Lifschitz transformation.
The primary goal of this work is to present an alternative, epistemic-based
characterization of stable model semantics to the one given by the
Gelfond-Lifschitz transformation. In particular, we show that stable model
semantics can be defined entirely as an extension of the Kripke-Kleene
semantics. Indeed, we show that the closed world assumption can be seen as an
additional source of `falsehood' to be added cumulatively to the Kripke-Kleene
semantics. Our approach is purely algebraic and can abstract from the
particular formalism of choice as it is based on monotone operators (under the
knowledge order) over bilattices only.
|
The role of behavior modifiers in representation development
|
We address the problem of the development of representations and their
relationship to the environment. We study a software agent which develops, in a
network, a representation of its simple environment that captures and
integrates the relationships between agent and environment through a closure
mechanism. The inclusion of a variable behavior modifier allows better
representation development. This can be confirmed with an internal description
of the closure mechanism, and with an external description of the properties of
the representation network.
|
Parametric external predicates for the DLV System
|
This document describes syntax, semantics and implementation guidelines for
enriching the DLV system with the possibility of making external C function
calls. This feature is realized by the introduction of parametric external
predicates, whose extension is not specified through a logic program but
implicitly computed through external code.
|
Toward the Implementation of Functions in the DLV System (Preliminary
Technical Report)
|
This document describes how functions are treated in the DLV system.
We first give the language and then specify the main implementation issues.
|
Knowledge And The Action Description Language A
|
We introduce Ak, an extension of the action description language A (Gelfond
and Lifschitz, 1993) to handle actions which affect knowledge. We use sensing
actions to increase an agent's knowledge of the world and non-deterministic
actions to remove knowledge. We include complex plans involving conditionals
and loops in our query language for hypothetical reasoning. We also present a
translation of Ak domain descriptions into epistemic logic programs.
|
A Comparative Study of Fuzzy Classification Methods on Breast Cancer
Data
|
In this paper, we examine the performance of four fuzzy rule generation
methods on Wisconsin breast cancer data. The first method generates fuzzy
if-then rules using the mean and the standard deviation of attribute values.
The second approach generates fuzzy if-then rules using the histogram of
attribute values. The third procedure generates fuzzy if-then rules with
certainty grades, partitioning each attribute into homogeneous fuzzy sets. In
the fourth approach, only overlapping areas are partitioned. The first two
approaches generate a single fuzzy if-then rule for each class by specifying
the membership function of each antecedent fuzzy set using information about
the attribute values of training patterns. The other two approaches are based
on fuzzy grids with homogeneous fuzzy partitions of each attribute. The
performance of each approach is evaluated on breast cancer data sets.
Simulation results show that the modified grid approach achieves a high
classification rate of 99.73%.
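A small sketch of the first method as described above: one fuzzy if-then rule per class, with antecedent membership functions built from the mean and standard deviation of each attribute over that class's training patterns. The Gaussian membership shape, the product aggregation and the toy data are assumptions made for illustration, not details taken from the paper.

```python
import numpy as np

def build_rules(X, y):
    """One fuzzy if-then rule per class: the antecedent for class c is a
    Gaussian membership function per attribute, centred at the class mean
    with the class standard deviation as its width."""
    rules = {}
    for c in np.unique(y):
        Xc = X[y == c]
        rules[c] = (Xc.mean(axis=0), Xc.std(axis=0) + 1e-9)
    return rules

def classify(rules, x):
    """Fire every rule with product aggregation of the attribute memberships
    and return the class of the rule with the highest firing strength."""
    def strength(mean, std):
        return float(np.prod(np.exp(-0.5 * ((x - mean) / std) ** 2)))
    return max(rules, key=lambda c: strength(*rules[c]))

# Illustrative two-attribute data for two classes.
X = np.array([[2.0, 1.0], [2.2, 1.1], [6.0, 5.0], [5.8, 4.9]])
y = np.array([0, 0, 1, 1])
print(classify(build_rules(X, y), np.array([5.9, 5.1])))   # -> 1
```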
|
Intelligent Systems: Architectures and Perspectives
|
The integration of different learning and adaptation techniques to overcome
individual limitations and to achieve synergetic effects through the
hybridization or fusion of these techniques has, in recent years, contributed
to a large number of new intelligent system designs. Computational intelligence
is an innovative framework for constructing intelligent hybrid architectures
involving Neural Networks (NN), Fuzzy Inference Systems (FIS), Probabilistic
Reasoning (PR) and derivative free optimization techniques such as Evolutionary
Computation (EC). Most of these hybridization approaches, however, follow an ad
hoc design methodology, justified by success in certain application domains.
Due to the lack of a common framework it often remains difficult to compare the
various hybrid systems conceptually and to evaluate their performance
comparatively. This chapter introduces the different generic architectures for
integrating intelligent systems. The design aspects and perspectives of
different hybrid architectures such as NN-FIS, EC-FIS, EC-NN, FIS-PR and
NN-FIS-EC systems are presented. Some conclusions are also provided towards the
end.
|
A Neuro-Fuzzy Approach for Modelling Electricity Demand in Victoria
|
Neuro-fuzzy systems have attracted the growing interest of researchers in
various scientific and engineering areas due to the increasing need for
intelligent systems. This paper evaluates the use of two popular soft computing
techniques and a conventional statistical approach based on the Box--Jenkins
autoregressive integrated moving average (ARIMA) model to predict electricity
demand in the
State of Victoria, Australia. The soft computing methods considered are an
evolving fuzzy neural network (EFuNN) and an artificial neural network (ANN)
trained using the scaled conjugate gradient algorithm (CGA) and the
backpropagation (BP) algorithm. The forecast accuracy is compared with the
forecasts used by
Victorian Power Exchange (VPX) and the actual energy demand. To evaluate, we
considered load demand patterns for 10 consecutive months taken every 30 min
for training the different prediction models. Test results show that the
neuro-fuzzy system performed better than neural networks, ARIMA model and the
VPX forecasts.
|
Neuro Fuzzy Systems: State-of-the-Art Modeling Techniques
|
The fusion of Artificial Neural Networks (ANN) and Fuzzy Inference Systems (FIS)
has attracted the growing interest of researchers in various scientific and
engineering areas due to the growing need for adaptive intelligent systems to
solve the real world problems. ANN learns from scratch by adjusting the
interconnections between layers. FIS is a popular computing framework based on
the concept of fuzzy set theory, fuzzy if-then rules, and fuzzy reasoning. The
advantages of a combination of ANN and FIS are obvious. There are several
approaches to integrate ANN and FIS and very often it depends on the
application. We broadly classify the integration of ANN and FIS into three
categories namely concurrent model, cooperative model and fully fused model.
This paper starts with a discussion of the features of each model and
generalizes the advantages and deficiencies of each. We further focus the
review on the different types of fused neuro-fuzzy systems, citing the
advantages and disadvantages of each model.
|
Is Neural Network a Reliable Forecaster on Earth? A MARS Query!
|
Long-term rainfall prediction is a challenging task especially in the modern
world where we are facing the major environmental problem of global warming. In
general, climate and rainfall are highly non-linear phenomena in nature
exhibiting what is known as the butterfly effect. While some regions of the
world are noticing a systematic decrease in annual rainfall, others notice
increases in flooding and severe storms. The global nature of this phenomenon
is very complicated and requires sophisticated computer modeling and simulation
to predict accurately. In this paper, we report a performance analysis for
Multivariate Adaptive Regression Splines (MARS) and artificial neural networks
for one-month-ahead prediction of rainfall. To evaluate the prediction
efficiency, we made use of 87 years of rainfall data for Kerala state, in the
southern part of the Indian peninsula, situated at the latitude-longitude pair
(8°29'N, 76°57'E). We used an artificial neural network trained using the
scaled conjugate gradient algorithm. The neural network and MARS were trained
with 40 years of rainfall data. For performance evaluation, network predicted
outputs were compared with the actual rainfall data. Simulation results reveal
that MARS is a good forecasting tool and performed better than the considered
neural network.
|
DCT Based Texture Classification Using Soft Computing Approach
|
Classification of texture pattern is one of the most important problems in
pattern recognition. In this paper, we present a classification method based on
the Discrete Cosine Transform (DCT) coefficients of the texture image. As the
DCT works on gray-level images, the color scheme of each image is transformed
into gray levels. For classifying the images using the DCT we used two popular
soft computing
techniques namely neurocomputing and neuro-fuzzy computing. We used a
feedforward neural network trained using the backpropagation learning and an
evolving fuzzy neural network to classify the textures. The soft computing
models were trained using 80% of the texture data and the remainder was used
for testing and validation purposes. A performance comparison was made among
the soft computing models for the texture classification problem. We also
analyzed the effects of prolonged training of neural networks. It is observed
that the proposed neuro-fuzzy model performed better than the neural network.
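A minimal sketch of the feature-extraction step described above: convert the image to gray levels, take the 2-D DCT of non-overlapping blocks and keep a few low-frequency coefficients per block as the feature vector handed to the classifier. The block size, the number of coefficients kept and the random input are illustrative choices, not the paper's settings.

```python
import numpy as np
from scipy.fftpack import dct

def dct2(block):
    """2-D DCT-II of an image block."""
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def to_gray(rgb):
    """Standard luminance conversion of an RGB image array."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def dct_features(gray, block=8, n_coeffs=6):
    """Per-block DCT coefficients; the first few coefficients of each block
    (low spatial frequencies) form the texture feature vectors."""
    h, w = (gray.shape[0] // block) * block, (gray.shape[1] // block) * block
    feats = []
    for i in range(0, h, block):
        for j in range(0, w, block):
            coeffs = dct2(gray[i:i + block, j:j + block])
            feats.append(coeffs.flatten()[:n_coeffs])
    return np.array(feats)

# Illustrative use on a random "texture"; a real input would be a texture image.
rgb = np.random.rand(32, 32, 3)
print(dct_features(to_gray(rgb)).shape)   # (16, 6): 16 blocks, 6 features each
```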
|
Estimating Genome Reversal Distance by Genetic Algorithm
|
Sorting by reversals is an important problem in inferring the evolutionary
relationship between two genomes. The problem of sorting unsigned permutations
has been proven to be NP-hard. The best guaranteed error bound is that of the
3/2-approximation algorithm. However, the problem of sorting signed
permutations can be solved easily. Fast algorithms have been developed both for
finding the sorting sequence and for finding the reversal distance of a signed
permutation. In this paper, we present a way to view the problem of sorting an
unsigned permutation as that of sorting a signed permutation. The problem can
then be seen as searching for an optimal signed permutation among all 2^n
corresponding signed permutations. We use a genetic algorithm to conduct the
search. Our experimental results show that the proposed method outperforms the
3/2-approximation algorithm.
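A small sketch of the search described above: a genetic algorithm over the 2^n sign assignments of a fixed unsigned permutation. Because the exact signed reversal distance (the Hannenhalli-Pevzner computation) does not fit in a few lines, the sketch uses the number of breakpoints of the signed permutation as an illustrative stand-in for the objective; the GA parameters are likewise illustrative.

```python
import random
random.seed(0)

def breakpoints(signed_perm):
    """Breakpoint count of a signed permutation; used here as an illustrative
    stand-in for the exact signed reversal distance."""
    ext = [0] + list(signed_perm) + [len(signed_perm) + 1]
    return sum(1 for a, b in zip(ext, ext[1:]) if b - a != 1)

def ga_sign_search(perm, pop_size=30, generations=200, p_mut=0.1):
    """Genetic search over sign assignments for an unsigned permutation,
    minimizing the objective above on the resulting signed permutation."""
    n = len(perm)
    def fitness(signs):
        return breakpoints([s * p for s, p in zip(signs, perm)])
    pop = [[random.choice((-1, 1)) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[:pop_size // 2]          # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n)         # one-point crossover
            child = a[:cut] + b[cut:]
            child = [-s if random.random() < p_mut else s for s in child]
            children.append(child)
        pop = survivors + children
    best = min(pop, key=fitness)
    return [s * p for s, p in zip(best, perm)], fitness(best)

print(ga_sign_search([3, 1, 2, 4]))   # best signed version found and its score
```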
|
Intrusion Detection Systems Using Adaptive Regression Splines
|
The past few years have witnessed a growing recognition of intelligent
techniques for the construction of efficient and reliable intrusion detection
systems. Due to increasing incidents of cyber attacks, building effective
intrusion detection systems (IDS) is essential for protecting information
systems security, and yet it remains an elusive goal and a great challenge. In
this
paper, we report a performance analysis between Multivariate Adaptive
Regression Splines (MARS), neural networks and support vector machines. The
MARS procedure builds flexible regression models by fitting separate splines to
distinct intervals of the predictor variables. A brief comparison of different
neural network learning algorithms is also given.
|
Data Mining Approach for Analyzing Call Center Performance
|
The aim of our research was to apply well-known data mining techniques (such
as linear neural networks, multi-layered perceptrons, probabilistic neural
networks, classification and regression trees, support vector machines and
finally a hybrid decision tree neural network approach) to the problem of
predicting the quality of service in call centers, based on the performance
data actually collected in a call center of a large insurance company. Our aim
was two-fold: first, to compare the performance of models built using the
above-mentioned techniques and, second, to analyze the characteristics of the
input sensitivity in order to better understand the relationship between the
performance evaluation process and the actual performance, and in this way help
improve the performance of call centers. In this paper we summarize our
findings.
|
Modeling Chaotic Behavior of Stock Indices Using Intelligent Paradigms
|
The use of intelligent systems for stock market predictions has been widely
established. In this paper, we investigate how the seemingly chaotic behavior
of stock markets could be well represented using several connectionist
paradigms and soft computing techniques. To demonstrate the different
techniques, we considered the Nasdaq-100 index of the Nasdaq Stock Market and
the S&P CNX NIFTY stock index. We analyzed 7 years of Nasdaq-100 main index
values and 4 years of NIFTY index values. This paper investigates the
development of a
reliable and efficient technique to model the seemingly chaotic behavior of
stock markets. We considered an artificial neural network trained using
Levenberg-Marquardt algorithm, Support Vector Machine (SVM), Takagi-Sugeno
neuro-fuzzy model and a Difference Boosting Neural Network (DBNN). This paper
briefly explains how the different connectionist paradigms could be formulated
using different learning methods and then investigates whether they can provide
the required level of performance, one sufficiently good and robust to provide
a reliable forecast model for stock market indices. Experiment
results reveal that all the connectionist paradigms considered could represent
the stock indices behavior very accurately.
|
Hybrid Fuzzy-Linear Programming Approach for Multi Criteria Decision
Making Problems
|
The purpose of this paper is to point to the usefulness of applying a linear
mathematical formulation of fuzzy multiple criteria objective decision methods
in organising business activities. In this respect fuzzy parameters of linear
programming are modelled by preference-based membership functions. This paper
begins with an introduction and some related research followed by some
fundamentals of fuzzy set theory and technical concepts of fuzzy multiple
objective decision models. Further a real case study of a manufacturing plant
and the implementation of the proposed technique is presented. Empirical
results clearly show the superiority of the fuzzy technique in optimising
individual objective functions when compared to a non-fuzzy approach.
Furthermore, for the problem considered, the optimal solution helps to infer
that incorporating fuzziness in a linear programming model, either in the
constraints alone or in both the objective functions and the constraints,
provides a similar (or even better) level of satisfaction for the obtained
results compared to non-fuzzy linear programming.
|
Meta-Learning Evolutionary Artificial Neural Networks
|
In this paper, we present MLEANN (Meta-Learning Evolutionary Artificial
Neural Network), an automatic computational framework for the adaptive
optimization of artificial neural networks wherein the neural network
architecture, activation function, connection weights, learning algorithm and
its parameters are adapted according to the problem. We explored the
performance of MLEANN and conventionally designed artificial neural networks
for function approximation problems. To evaluate the comparative performance,
we used three different well-known chaotic time series. We also present the
state-of-the-art popular neural network learning algorithms and some
experimental results related to convergence speed and generalization
performance. We explored the performance of the backpropagation, conjugate
gradient, quasi-Newton and Levenberg-Marquardt algorithms for the three chaotic
time series. The performance of the different learning algorithms was evaluated
when the activation functions and architecture were changed. We further present
the theoretical background, algorithm and design strategy, and demonstrate how
effective the proposed MLEANN framework is for designing a neural network that
is smaller, faster and has better generalization performance.
|
The Largest Compatible Subset Problem for Phylogenetic Data
|
Phylogenetic tree construction aims to infer the evolutionary relationships
between species from experimental data. However, the experimental data are
often imperfect and conflict with each other. Therefore, it is important to
extract the underlying motif from the imperfect data. In the largest compatible
subset problem, given a set of experimental data, we want to discard the
minimum amount of data such that the remainder is compatible. The largest
compatible subset problem can be viewed as the vertex cover problem in graph
theory, which has been proven to be NP-hard. In this paper, we propose a hybrid Evolutionary
Computing (EC) method for this problem. The proposed method combines the EC
approach and the algorithmic approach for special structured graphs. As a
result, the complexity of the problem is dramatically reduced. Experiments were
performed on randomly generated graphs with different edge densities. The
vertex covers produced by the proposed method were then compared to the vertex
covers produced by a 2-approximation algorithm. The experimental results showed
that the proposed method consistently outperformed a classical 2-approximation
algorithm. Furthermore, a significant improvement was found when the graph
density was small.
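The baseline mentioned above is "a classical 2-approximation algorithm"; the sketch below shows the standard maximal-matching-based 2-approximation for vertex cover, which is the usual textbook choice (the paper's exact baseline implementation is not specified here).

```python
# Sketch of the classical maximal-matching 2-approximation for vertex cover,
# the kind of baseline the abstract compares against (the paper's exact
# implementation details are not reproduced).
def vertex_cover_2approx(edges):
    """Return a vertex cover at most twice the optimum size."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            # Edge still uncovered: take both endpoints (a matching edge).
            cover.add(u)
            cover.add(v)
    return cover

# Example: a small conflict graph between incompatible data items.
edges = [(1, 2), (1, 3), (2, 4), (3, 4), (4, 5)]
print(vertex_cover_2approx(edges))   # -> {1, 2, 3, 4} for this edge order
```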
|
A Concurrent Fuzzy-Neural Network Approach for Decision Support Systems
|
Decision-making is a process of choosing among alternative courses of action
for solving complicated problems where multi-criteria objectives are involved.
The past few years have witnessed a growing recognition of Soft Computing
technologies that underlie the conception, design and utilization of
intelligent systems. Much work has been reported in which engineers and
scientists have applied intelligent techniques and heuristics to obtain optimal
decisions from imprecise information. In this paper, we present a concurrent
fuzzy-neural network approach combining unsupervised and supervised learning
techniques to develop the Tactical Air Combat Decision Support System (TACDSS).
Experimental results clearly demonstrate the efficiency of the proposed
technique.
|
Analysis of Hybrid Soft and Hard Computing Techniques for Forex
Monitoring Systems
|
In a universe with a single currency, there would be no foreign exchange
market, no foreign exchange rates, and no foreign exchange. Over the past
twenty-five years, the way the market has performed those tasks has changed
enormously. The need for intelligent monitoring systems has become a necessity
to keep track of the complex forex market. The vast currency market is a
foreign concept to the average individual. However, once it is broken down into
simple terms, the average individual can begin to understand the foreign
exchange market and use it as a financial instrument for future investing. In
this paper, we attempt to compare the performance of hybrid soft computing and
hard computing techniques to predict the average monthly forex rates one month
ahead. The soft computing models considered are a neural network trained by the
scaled conjugate gradient algorithm and a neuro-fuzzy model implementing a
Takagi-Sugeno fuzzy inference system. We also considered Multivariate Adaptive
Regression Splines (MARS), Classification and Regression Trees (CART) and a
hybrid CART-MARS technique. We considered the exchange rates of the Australian
dollar with respect to the US dollar, the Singapore dollar, the New Zealand
dollar, the Japanese yen and the UK pound. The models were trained using 70% of
the data, and the remainder was used for testing and validation. It is observed
that the proposed hybrid models could predict the forex rates more accurately
than any of the techniques applied individually. Empirical results also reveal
that the hybrid hard computing approach improved on some of our previous work
using a neuro-fuzzy approach.
|
Business Intelligence from Web Usage Mining
|
The rapid growth of e-commerce has made both the business community and
customers face a new situation. Due to intense competition on the one hand, and
the customer's option to choose from several alternatives on the other, the
business community has realized the necessity of intelligent marketing
strategies and relationship management. Web
usage mining attempts to discover useful knowledge from the secondary data
obtained from the interactions of the users with the Web. Web usage mining has
become very critical for effective Web site management, creating adaptive Web
sites, business and support services, personalization, network traffic flow
analysis and so on. In this paper, we present the important concepts of Web
usage mining and its various practical applications. We further present a novel
approach 'intelligent-miner' (i-Miner) to optimize the concurrent architecture
of a fuzzy clustering algorithm (to discover web data clusters) and a fuzzy
inference system to analyze the Web site visitor trends. A hybrid evolutionary
fuzzy clustering algorithm is proposed in this paper to optimally segregate
similar user interests. The clustered data is then used to analyze the trends
using a Takagi-Sugeno fuzzy inference system learned using a combination of
an evolutionary algorithm and neural network learning. The proposed approach is
compared with self-organizing maps (to discover patterns) and several function
approximation techniques like neural networks, linear genetic programming and
Takagi-Sugeno fuzzy inference system (to analyze the clusters). The results are
graphically illustrated and the practical significance is discussed in detail.
Empirical results clearly show that the proposed Web usage-mining framework is
efficient.
|
Adaptation of Mamdani Fuzzy Inference System Using Neuro - Genetic
Approach for Tactical Air Combat Decision Support System
|
Normally a decision support system is built to solve problems where
multi-criteria decisions are involved. The knowledge base is the vital part of
the decision support system, containing the information or data that is used in
the decision-making process. This is the field where engineers and scientists
have applied several intelligent techniques and heuristics to obtain optimal
decisions from imprecise information. In this paper, we present a hybrid
neuro-genetic learning approach for the adaptation of a Mamdani fuzzy inference
system for the Tactical Air Combat Decision Support System (TACDSS). Some
simulation results demonstrating the differences between the learning
techniques are also provided.
|
EvoNF: A Framework for Optimization of Fuzzy Inference Systems Using
Neural Network Learning and Evolutionary Computation
|
Several adaptation techniques have been investigated to optimize fuzzy
inference systems. Neural network learning algorithms have been used to
determine the parameters of the fuzzy inference system. Such models are often
called integrated neuro-fuzzy models. In an integrated neuro-fuzzy model
there is no guarantee that the neural network learning algorithm converges or
that the tuning of the fuzzy inference system will be successful. The success
of evolutionary search procedures for the optimization of fuzzy inference
systems is well proven and established in many application areas. In this paper, we will
explore how the optimization of fuzzy inference systems could be further
improved using a meta-heuristic approach combining neural network learning and
evolutionary computation. The proposed technique could be considered as a
methodology to integrate neural networks, fuzzy inference systems and
evolutionary search procedures. We present the theoretical frameworks and some
experimental results to demonstrate the efficiency of the proposed technique.
|
Optimization of Evolutionary Neural Networks Using Hybrid Learning
Algorithms
|
Evolutionary artificial neural networks (EANNs) refer to a special class of
artificial neural networks (ANNs) in which evolution is another fundamental
form of adaptation in addition to learning. Evolutionary algorithms are used to
adapt the connection weights, network architecture and learning algorithms
according to the problem environment. Even though evolutionary algorithms are
well known as efficient global search algorithms, very often they miss the best
local solutions in the complex solution space. In this paper, we propose a
hybrid meta-heuristic learning approach combining evolutionary learning and
local search methods (using first- and second-order error information) to
improve the learning and achieve faster convergence than a direct evolutionary approach.
The proposed technique is tested on three different chaotic time series and the
test results are compared with some popular neuro-fuzzy systems and a recently
developed cutting angle method of global optimization. Empirical results reveal
that the proposed technique is efficient in spite of the computational
complexity.
|
Export Behaviour Modeling Using EvoNF Approach
|
The academic literature suggests that the extent of exporting by
multinational corporation subsidiaries (MCS) depends on the products they
manufacture, their resources, tax protection, customers and markets, involvement
strategy, financial independence and suppliers' relationship with a
multinational corporation (MNC). The aim of this paper is to model the complex
export pattern behaviour using a Takagi-Sugeno fuzzy inference system in order
to determine the actual volume of MCS export output (sales exported). The
proposed fuzzy inference system is optimised by using neural network learning
and evolutionary computation. Empirical results clearly show that the proposed
approach could model the export behaviour reasonably well compared to a direct
neural network approach.
|
Traffic Accident Analysis Using Decision Trees and Neural Networks
|
The costs of fatalities and injuries due to traffic accidents have a great
impact on society. This paper presents our research to model the severity of
injury resulting from traffic accidents using artificial neural networks and
decision trees. We have applied them to an actual data set obtained from the
National Automotive Sampling System (NASS) General Estimates System (GES).
Experimental results reveal that in all cases the decision tree outperforms
the neural network. Our research analysis also shows that the three most
important factors in fatal injury are: driver's seat belt usage, light
condition of the roadway, and driver's alcohol usage.
|
Short Term Load Forecasting Models in Czech Republic Using Soft
Computing Paradigms
|
This paper presents a comparative study of six soft computing models namely
multilayer perceptron networks, Elman recurrent neural network, radial basis
function network, Hopfield model, fuzzy inference system and hybrid fuzzy
neural network for the hourly electricity demand forecast of the Czech Republic.
The soft computing models were trained and tested using the actual hourly load
data for seven years. A comparison of the proposed techniques is presented for
predicting electricity demand two days ahead. Simulation results indicate
that hybrid fuzzy neural network and radial basis function networks are the
best candidates for the analysis and forecasting of electricity demand.
|
Decision Support Systems Using Intelligent Paradigms
|
Decision-making is a process of choosing among alternative courses of action
for solving complicated problems where multi-criteria objectives are involved.
The past few years have witnessed a growing recognition of Soft Computing (SC)
technologies that underlie the conception, design and utilization of
intelligent systems. In this paper, we present different SC paradigms involving
an artificial neural network trained using the scaled conjugate gradient
algorithm, two different fuzzy inference methods optimised using neural network
learning/evolutionary algorithms and regression trees for developing
intelligent decision support systems. We demonstrate the efficiency of the
different algorithms by developing a decision support system for a Tactical Air
Combat Environment (TACE). Some empirical comparisons between the different
algorithms are also provided.
|
Regression with respect to sensing actions and partial states
|
In this paper, we present a state-based regression function for planning
domains where an agent does not have complete information and may have sensing
actions. We consider binary domains and employ the 0-approximation [Son & Baral
2001] to define the regression function. In binary domains, the use of
0-approximation means using 3-valued states. Although planning using this
approach is incomplete with respect to the full semantics, we adopt it to have
a lower complexity. We prove the soundness and completeness of our regression
formulation with respect to the definition of progression. More specifically,
we show that (i) a plan obtained through regression for a planning problem is
indeed a progression solution of that planning problem, and that (ii) for each
plan found through progression, using regression one obtains that plan or an
equivalent one. We then develop a conditional planner that utilizes our
regression function. We prove the soundness and completeness of our planning
algorithm and present experimental results with respect to several well known
planning problems in the literature.
|
Propositional Defeasible Logic has Linear Complexity
|
Defeasible logic is a rule-based nonmonotonic logic, with both strict and
defeasible rules, and a priority relation on rules. We show that inference in
the propositional form of the logic can be performed in linear time. This
contrasts markedly with most other propositional nonmonotonic logics, in which
inference is intractable.
|
Pruning Search Space in Defeasible Argumentation
|
Defeasible argumentation has experienced a considerable growth in AI in the
last decade. Theoretical results have been combined with development of
practical applications in AI & Law, Case-Based Reasoning and various
knowledge-based systems. However, the dialectical process associated with
inference is computationally expensive. This paper focuses on speeding up this
inference process by pruning the involved search space. Our approach is
twofold. On one hand, we identify distinguished literals for computing defeat.
On the other hand, we restrict ourselves to a subset of all possible
conflicting arguments by introducing dialectical constraints.
|
A proposal to design expert system for the calculations in the domain of
QFT
|
The main purposes of the paper are the following: 1) to show examples of
calculations in the domain of QFT via ``derivative rules'' of an expert system;
2) to consider the advantages and disadvantages of that technology of
calculation; 3) to reflect on how one would develop new physical theories, what
knowledge would be useful in their investigation and how this problem can be
connected with designing an expert system.
|
A New Approach to Draw Detection by Move Repetition in Computer Chess
Programming
|
We will try to tackle both the theoretical and practical aspects of a very
important problem in chess programming as stated in the title of this article -
the issue of draw detection by move repetition. The standard approach that has
so far been employed in most chess programs is based on utilising positional
matrices in original and compressed format as well as on the implementation of
the so-called bitboard format.
The new approach that we will be trying to introduce is based on using
variant strings generated by the search algorithm (searcher) during the tree
expansion in decision making. We hope to prove that this approach is more
efficient than the standard treatment of the issue, especially in positions
with few pieces (endgames). To illustrate what we have in mind, a machine
language routine that implements our theoretical assumptions is attached. The
routine is part of the Axon chess program, developed by the authors. Axon, in
its current incarnation, plays chess at master strength (ca. 2400-2450 Elo,
based on both Axon vs computer programs and Axon vs human masters in over 3000
games altogether).
|
Autogenic Training With Natural Language Processing Modules: A Recent
Tool For Certain Neuro Cognitive Studies
|
Learning to respond to voice-text input involves the subject's ability to
understand the phonetic and text-based content and his/her ability to
communicate based on his/her experience. The neuro-cognitive facility of the
subject has to support two important domains in order to make the learning
process complete. In many cases, though the understanding is complete, the
response is partial. This is one valid reason why we need to support the
information from the subject with scalable techniques such as Natural Language
Processing (NLP) for abstraction of the contents from the output. This paper
explores the feasibility of using NLP modules interlaced with Neural Networks
to perform the required task in autogenic training related to medical
applications.
|
Generalized Evolutionary Algorithm based on Tsallis Statistics
|
A generalized evolutionary algorithm based on the Tsallis canonical
distribution is proposed. The algorithm uses the Tsallis generalized canonical
distribution to weight the configurations for `selection' instead of the
Gibbs-Boltzmann distribution. Our simulation results show that, for an
appropriate choice of the non-extensive index offered by Tsallis statistics,
evolutionary algorithms based on this generalization outperform algorithms
based on the Gibbs-Boltzmann distribution.
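As a rough illustration of the idea, the sketch below weights configurations by the Tsallis q-exponential instead of the Boltzmann exponential; the exact weighting, parameter handling and selection operator used in the paper may differ.

```python
# Illustrative sketch: selection probabilities weighted by the Tsallis
# (q-exponential) canonical form instead of the Gibbs-Boltzmann exponential.
import numpy as np

def q_exponential(x, q):
    """Tsallis q-exponential [1 + (1-q)x]_+^(1/(1-q)); tends to exp(x) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(x)
    base = np.maximum(1.0 + (1.0 - q) * x, 0.0)
    return base ** (1.0 / (1.0 - q))

def tsallis_selection_probs(fitness, beta=1.0, q=0.5):
    """Selection probabilities proportional to the q-exponential of beta * fitness."""
    w = q_exponential(beta * np.asarray(fitness, dtype=float), q)
    return w / w.sum()

fitness = [0.2, 0.5, 0.9, 1.0]
print(tsallis_selection_probs(fitness, q=0.5))   # non-extensive weighting
print(tsallis_selection_probs(fitness, q=1.0))   # recovers Gibbs-Boltzmann weights
```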
|
Decomposition Based Search - A theoretical and experimental evaluation
|
In this paper we present and evaluate a search strategy called Decomposition
Based Search (DBS) which is based on two steps: subproblem generation and
subproblem solution. The generation of subproblems is done through value
ranking and domain splitting. Subdomains are explored so as to generate,
according to the heuristic chosen, promising subproblems first.
We show that two well-known search strategies, Limited Discrepancy Search
(LDS) and Iterative Broadening (IB), can be seen as special cases of DBS. First
we present a tuning of DBS that visits the same search nodes as IB, but avoids
restarts. Then we compare DBS and LDS both theoretically and computationally,
using the same heuristic. We prove that DBS has a higher probability of being
successful than LDS on a comparable number of nodes, under realistic
assumptions. Experiments on a constraint satisfaction problem and an
optimization problem show that DBS is indeed very effective if compared to LDS.
|
Postponing Branching Decisions
|
Solution techniques for Constraint Satisfaction and Optimisation Problems
often make use of backtrack search methods, exploiting variable and value
ordering heuristics. In this paper, we propose and analyse a very simple method
to apply in case the value ordering heuristic produces ties: postponing the
branching decision. To this end, we group together values in a tie, branch on
this sub-domain, and defer the decision among them to lower levels of the
search tree. We show theoretically and experimentally that this simple
modification can dramatically improve the efficiency of the search strategy.
Although in practice similar methods may have been applied already, to our
knowledge, no empirical or theoretical study has been proposed in the
literature to identify when and to what extent this strategy should be used.
|
Reduced cost-based ranking for generating promising subproblems
|
In this paper, we propose an effective search procedure that interleaves two
steps: subproblem generation and subproblem solution. We mainly focus on the
first part. It consists of a variable domain value ranking based on reduced
costs. Exploiting the ranking, we generate, in a Limited Discrepancy Search
tree, the most promising subproblems first. An interesting result is that
reduced costs provide a very precise ranking that almost always allows the
optimal solution to be found in the first generated subproblem, even if its
dimension is significantly smaller than that of the original problem.
Concerning the proof of optimality, we exploit a way to increase the lower
bound for subproblems at higher discrepancies. We present experimental results
on the TSP and its time-constrained variant to show the effectiveness of the
proposed approach; the technique could also be generalized to other problems.
|
A Simple Proportional Conflict Redistribution Rule
|
We propose a first alternative rule of combination to the WAO (Weighted Average
Operator) recently proposed by Josang, Daniel and Vannoorenberghe, called the
Proportional Conflict Redistribution rule (denoted PCR1). PCR1 and WAO are
particular cases of the WO (Weighted Operator), because the conflicting mass is
redistributed with respect to some weighting factors. In this first PCR rule,
the proportionalization is done for each non-empty set with respect to the
non-zero sum of its corresponding column in the mass matrix, instead of its
mass column average as in WAO; the results, however, are the same, as Ph. Smets has pointed out.
Also, we extend WAO (which herein gives no solution) for the degenerate case
when all column sums of all non-empty sets are zero, and then the conflicting
mass is transferred to the non-empty disjunctive form of all non-empty sets
together; but if this disjunctive form happens to be empty, then one considers
an open world (i.e. the frame of discernment might contain new hypotheses) and
thus all conflicting mass is transferred to the empty set. In addition to WAO,
we propose a general formula for PCR1 (WAO for non-degenerate cases).
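A minimal sketch of the PCR1 idea on an ordinary power set is given below: the conflicting mass from the conjunctive combination is redistributed proportionally to each non-empty set's column sum m1(X) + m2(X). The degenerate cases handled in the paper are ignored here.

```python
# Minimal sketch of PCR1 on the power set of a small frame: combine two basic
# belief assignments conjunctively, then redistribute the total conflicting
# mass proportionally to each non-empty set's column sum m1(X) + m2(X).
def pcr1(m1, m2):
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb                     # mass falling on the empty set
    col_sum = {x: m1.get(x, 0.0) + m2.get(x, 0.0)
               for x in set(m1) | set(m2)}
    total = sum(col_sum.values())
    for x, c in col_sum.items():                        # proportional transfer
        combined[x] = combined.get(x, 0.0) + conflict * c / total
    return combined

A, B = frozenset({"a"}), frozenset({"b"})
m1 = {A: 0.6, frozenset({"a", "b"}): 0.4}
m2 = {B: 0.7, frozenset({"a", "b"}): 0.3}
print(pcr1(m1, m2))   # masses still sum to 1 after redistribution
```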
|
An Algorithm for Quasi-Associative and Quasi-Markovian Rules of
Combination in Information Fusion
|
In this paper we propose a simple algorithm for combining fusion rules, namely
those rules which first apply the conjunctive rule and then transfer the
conflicting mass to the non-empty sets, in such a way that they gain the
property of associativity and fulfill the Markovian requirement for dynamic
fusion. A new rule, SDL-improved, is also presented.
|
FLUX: A Logic Programming Method for Reasoning Agents
|
FLUX is a programming method for the design of agents that reason logically
about their actions and sensor information in the presence of incomplete
knowledge. The core of FLUX is a system of Constraint Handling Rules, which
enables agents to maintain an internal model of their environment by which they
control their own behavior. The general action representation formalism of the
fluent calculus provides the formal semantics for the constraint solver. FLUX
exhibits excellent computational behavior due to both a carefully restricted
expressiveness and the inference paradigm of progression.
|
Cauchy Annealing Schedule: An Annealing Schedule for Boltzmann Selection
Scheme in Evolutionary Algorithms
|
Boltzmann selection is an important selection mechanism in evolutionary
algorithms, as it has properties which facilitate theoretical analysis.
However, Boltzmann selection is not used in practice because a good annealing
schedule for the `inverse temperature' parameter is lacking. In this paper we
propose a Cauchy annealing schedule for the Boltzmann selection scheme, based
on the hypothesis that selection strength should increase as the evolutionary
process goes on and that the distance between two successive selection
strengths should decrease for the process to converge. To formalize these
aspects, we develop a formalism for selection mechanisms using fitness
distributions and give an appropriate measure of selection strength. We prove a
result from which we derive an annealing schedule called the Cauchy annealing
schedule. We demonstrate the effectiveness of the proposed annealing schedule
using simulations in the framework of genetic algorithms.
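The following sketch shows Boltzmann selection driven by an increasing inverse-temperature sequence whose increments shrink over time. The particular schedule used (beta_t = beta_max * t / (t + tau)) is only an illustrative stand-in with that qualitative behaviour; it is not the Cauchy annealing schedule derived in the paper.

```python
# Boltzmann selection with an annealed inverse temperature.  The schedule below
# is an illustrative increasing schedule with shrinking increments, a stand-in
# for (not a statement of) the paper's Cauchy annealing schedule.
import numpy as np

def boltzmann_select(fitness, beta, rng):
    """Sample one individual with probability proportional to exp(beta * fitness)."""
    f = np.asarray(fitness, dtype=float)
    w = np.exp(beta * (f - f.max()))          # shift for numerical stability
    return rng.choice(len(f), p=w / w.sum())

rng = np.random.default_rng(0)
fitness = [0.1, 0.4, 0.8, 1.0]
beta_max, tau = 10.0, 20.0
for t in (1, 10, 100, 1000):
    beta_t = beta_max * t / (t + tau)          # selection strength increases with t
    picks = [boltzmann_select(fitness, beta_t, rng) for _ in range(1000)]
    print(f"t={t:4d}  beta={beta_t:5.2f}  share of best = {picks.count(3)/1000:.2f}")
```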
|
Proportional Conflict Redistribution Rules for Information Fusion
|
In this paper we propose five versions of a Proportional Conflict
Redistribution rule (PCR) for information fusion together with several
examples. From PCR1 to PCR2, PCR3, PCR4, PCR5 one increases the complexity of
the rules and also the exactitude of the redistribution of conflicting masses.
PCR1 restricted from the hyper-power set to the power set and without
degenerate cases gives the same result as the Weighted Average Operator (WAO)
proposed recently by J{\o}sang, Daniel and Vannoorenberghe but does not satisfy
the neutrality property of vacuous belief assignment. That's why improved PCR
rules are proposed in this paper. PCR4 is an improvement of minC and Dempster's
rules. The PCR rules redistribute the conflicting mass, after the conjunctive
rule has been applied, proportionally with some functions depending on the
masses assigned to their corresponding columns in the mass matrix. There are
infinitely many ways these functions (weighting factors) can be chosen
depending on the complexity one wants to deal with in specific applications and
fusion systems. Any fusion combination rule is at some degree ad-hoc.
|
The Generalized Pignistic Transformation
|
This paper presents in detail the generalized pignistic transformation (GPT),
succinctly developed in the Dezert-Smarandache Theory (DSmT) framework, as a
tool for the decision process. The GPT allows a subjective probability measure
to be derived from any generalized basic belief assignment given by any corpus
of evidence. We mainly focus our presentation on the 3D case and provide the
complete result obtained by the GPT, together with its validation drawn from
probability theory.
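For reference, the classical pignistic transformation that the GPT generalizes (with DSm cardinality replacing plain set cardinality on the hyper-power set) can be sketched as follows; only the classical power-set case is shown here.

```python
# The classical pignistic transformation; the GPT generalizes it to DSmT's
# hyper-power set with DSm cardinality.  Only the power-set case is shown.
def pignistic(m):
    """BetP(x) = sum over focal sets A containing x of m(A) / |A|."""
    betp = {}
    for focal, mass in m.items():
        for x in focal:
            betp[x] = betp.get(x, 0.0) + mass / len(focal)
    return betp

m = {frozenset({"a"}): 0.4,
     frozenset({"a", "b"}): 0.5,
     frozenset({"a", "b", "c"}): 0.1}
print(pignistic(m))   # a ~ 0.683, b ~ 0.283, c ~ 0.033 (sums to 1)
```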
|
Unification of Fusion Theories
|
Since no fusion theory or rule fully satisfies all needed applications, the
author proposes a Unification of Fusion Theories and a combination of fusion
rules for solving problems/applications. For each particular application, one
selects the most appropriate model, rule(s) and algorithm of implementation. We
are working on the unification of the fusion theories and rules, which looks
like a cooking recipe, or better, like a logical chart for a computer
programmer, but we do not see another method to comprise/unify all these
things. The unification scenario presented herein, which is now in an incipient
form, should periodically be updated to incorporate new discoveries from fusion
and engineering research.
|
Normal forms for Answer Sets Programming
|
Normal forms for logic programs under stable/answer set semantics are
introduced. We argue that these forms can simplify the study of program
properties, mainly consistency. The first normal form, called the {\em kernel}
of the program, is useful for studying existence and number of answer sets. A
kernel program is composed of the atoms which are undefined in the Well-founded
semantics, which are those that directly affect the existence of answer sets.
The body of rules is composed of negative literals only. Thus, the kernel form
tends to be significantly more compact than other formulations. Also, it is
possible to check consistency of kernel programs in terms of colorings of the
Extended Dependency Graph program representation which we previously developed.
The second normal form is called {\em 3-kernel.} A 3-kernel program is composed
of the atoms which are undefined in the Well-founded semantics. Rules in
3-kernel programs have at most two conditions, and each rule either belongs to
a cycle, or defines a connection between cycles. 3-kernel programs may have
positive conditions. The 3-kernel normal form is very useful for the static
analysis of program consistency, i.e., the syntactic characterization of
existence of answer sets. This result can be obtained thanks to a novel
graph-like representation of programs, called the Cycle Graph, which is
presented in the companion article \cite{Cos04b}.
|
An In-Depth Look at Information Fusion Rules & the Unification of Fusion
Theories
|
This paper may look like a glossary of fusion rules; we also introduce new
ones, presenting their formulas and examples: Conjunctive, Disjunctive,
Exclusive Disjunctive, Mixed Conjunctive-Disjunctive rules, Conditional rule,
Dempster's, Yager's, Smets' TBM rule, Dubois-Prade's, Dezert-Smarandache
classical and hybrid rules, Murphy's average rule,
Inagaki-Lefevre-Colot-Vannoorenberghe Unified Combination rules [and, as
particular cases: Inagaki's parameterized rule, Weighting Average Operator,
minC (M. Daniel), and newly Proportional Conflict Redistribution rules
(Smarandache-Dezert) among which PCR5 is the most exact way of redistribution
of the conflicting mass to non-empty sets following the path of the conjunctive
rule], Zhang's Center Combination rule, Convolutive x-Averaging, Consensus
Operator (Josang), Cautious Rule (Smets), ?-junctions rules (Smets), etc. and
three new T-norm & T-conorm rules adjusted from fuzzy and neutrosophic sets to
information fusion (Tchamova-Smarandache). By introducing the degree of union
and the degree of inclusion with respect to the cardinality of sets (rather
than from the fuzzy-set point of view), besides that of intersection, many
fusion rules can be improved. There are corner cases where each rule might have
difficulties working or may not give the expected result.
|
Intransitivity and Vagueness
|
There are many examples in the literature that suggest that
indistinguishability is intransitive, despite the fact that the
indistinguishability relation is typically taken to be an equivalence relation
(and thus transitive). It is shown that if the uncertainty perception and the
question of when an agent reports that two things are indistinguishable are
both carefully modeled, the problems disappear, and indistinguishability can
indeed be taken to be an equivalence relation. Moreover, this model also
suggests a logic of vagueness that seems to solve many of the problems related
to vagueness discussed in the philosophical literature. In particular, it is
shown here how the logic can handle the sorites paradox.
|
Sleeping Beauty Reconsidered: Conditioning and Reflection in
Asynchronous Systems
|
A careful analysis of conditioning in the Sleeping Beauty problem is done,
using the formal model for reasoning about knowledge and probability developed
by Halpern and Tuttle. While the Sleeping Beauty problem has been viewed as
revealing problems with conditioning in the presence of imperfect recall, the
analysis done here reveals that the problems are not so much due to imperfect
recall as to asynchrony. The implications of this analysis for van Fraassen's
Reflection Principle and Savage's Sure-Thing Principle are considered.
|
Bounded Input Bounded Predefined Control Bounded Output
|
The paper is an attempt to generalize a methodology similar to the
bounded-input bounded-output method currently widely used for system stability
studies. The methodology presented earlier allows decomposition of the input
space into bounded subspaces and the definition, for each subspace, of its
bounding surface. It also defines a corresponding predefined control, which
maps any point of a bounded input into a desired bounded output subspace. This
methodology was improved by providing a mechanism for quickly defining a
bounding surface. This paper presents an enhanced bounded-input
bounded-predefined-control bounded-output approach, which adds adaptability to
the control and allows a controlled system to be transferred along a
suboptimal trajectory.
|
Generating Conditional Probabilities for Bayesian Networks: Easing the
Knowledge Acquisition Problem
|
The number of probability distributions required to populate a conditional
probability table (CPT) in a Bayesian network grows exponentially with the
number of parent-nodes associated with that table. If the table is to be
populated through knowledge elicited from a domain expert then the sheer
magnitude of the task forms a considerable cognitive barrier. In this paper we
devise an algorithm to populate the CPT while easing the extent of knowledge
acquisition. The input to the algorithm consists of a set of weights that
quantify the relative strengths of the influences of the parent-nodes on the
child-node, and a set of probability distributions the number of which grows
only linearly with the number of associated parent-nodes. These are elicited
from the domain expert. The set of probabilities is obtained by taking into
consideration the heuristics that experts use when arriving at probabilistic
estimations. The algorithm is used to populate the CPT by computing appropriate
weighted sums of the elicited distributions. We invoke the methods of
information geometry to demonstrate how these weighted sums capture the
expert's judgemental strategy.
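One plausible reading of such a weighted-sum scheme is sketched below: each CPT column is taken as the weighted sum of distributions elicited per parent state. The variable names and numbers are hypothetical, and the paper's exact combination rule and its information-geometric justification are not reproduced.

```python
# Hedged sketch of filling a CPT from per-parent elicited distributions and
# influence weights.  This is only one plausible reading of such a scheme;
# the paper's exact algorithm is not reproduced here.
from itertools import product
import numpy as np

child_states = ["low", "high"]
parents = {"Smoker": ["no", "yes"], "Exercise": ["yes", "no"]}   # hypothetical
weights = {"Smoker": 0.7, "Exercise": 0.3}                       # elicited strengths

# One child distribution per parent state (grows linearly with the parents).
elicited = {("Smoker", "no"):  [0.8, 0.2], ("Smoker", "yes"): [0.3, 0.7],
            ("Exercise", "yes"): [0.9, 0.1], ("Exercise", "no"): [0.5, 0.5]}

cpt = {}
for combo in product(*parents.values()):                  # every parent configuration
    col = np.zeros(len(child_states))
    for (parent, state), w in zip(zip(parents, combo), weights.values()):
        col += w * np.array(elicited[(parent, state)])
    cpt[combo] = col / col.sum()                          # normalise the column
print(cpt[("yes", "no")])                                 # P(child | Smoker=yes, Exercise=no)
```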
|
Comparing Multi-Target Trackers on Different Force Unit Levels
|
Consider the problem of tracking a set of moving targets. Apart from the
tracking result, it is often important to know where the tracking fails, either
to steer sensors to that part of the state-space, or to inform a human operator
about the status and quality of the obtained information. An intuitive quality
measure is the correlation between two tracking results based on uncorrelated
observations. In the case of Bayesian trackers such a correlation measure could
be the Kullback-Leibler difference.
We focus on a scenario with a large number of military units moving in some
terrain. The units are observed by several types of sensors and "meta-sensors"
with force aggregation capabilities. The sensors register units of different
size. Two separate multi-target probability hypothesis density (PHD) particle
filters are used to track some type of units (e.g., companies) and their
sub-units (e.g., platoons), respectively, based on observations of units of
those sizes. Each observation is used in one filter only.
Although the state-space may well be the same in both filters, the posterior
PHD distributions are not directly comparable -- one unit might correspond to
three or four spatially distributed sub-units. Therefore, we introduce a
mapping function between distributions for different unit size, based on
doctrine knowledge of unit configuration.
The mapped distributions can now be compared -- locally or globally -- using
some measure, which gives the correlation between two PHD distributions in a
bounded volume of the state-space. To locate areas where the tracking fails, a
discretized quality map of the state-space can be generated by applying the
measure locally to different parts of the space.
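As a rough illustration of such a local quality map, the sketch below compares two discretized intensity maps tile by tile with a symmetrised Kullback-Leibler measure; the unit-level mapping function and the exact measure used in the paper are not reproduced.

```python
# Illustrative sketch: comparing two discretized PHD intensity maps tile by
# tile with a symmetrised Kullback-Leibler measure to build a quality map.
import numpy as np

def sym_kl(p, q, eps=1e-12):
    """Symmetrised KL divergence between two normalised histograms."""
    p = p / p.sum() + eps
    q = q / q.sum() + eps
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

def quality_map(phd_a, phd_b, block=4):
    """Apply the measure locally on block x block tiles of the state-space grid."""
    h, w = phd_a.shape
    rows, cols = h // block, w // block
    qmap = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            a = phd_a[i*block:(i+1)*block, j*block:(j+1)*block]
            b = phd_b[i*block:(i+1)*block, j*block:(j+1)*block]
            qmap[i, j] = sym_kl(a, b)
    return qmap

rng = np.random.default_rng(1)
phd_a = rng.random((16, 16))          # stand-ins for the two filters' intensities
phd_b = phd_a + 0.3 * rng.random((16, 16))
print(quality_map(phd_a, phd_b).round(2))
```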
|
Extremal optimization for sensor report pre-processing
|
We describe the recently introduced extremal optimization algorithm and apply
it to target detection and association problems arising in pre-processing for
multi-target tracking.
Here we consider the problem of pre-processing for multiple target tracking
when the number of sensor reports received is very large and arrives in large
bursts. In this case, it is sometimes necessary to pre-process reports before
sending them to tracking modules in the fusion system. The pre-processing step
associates reports to known tracks (or initializes new tracks for reports on
objects that have not been seen before). It could also be used as a pre-process
step before clustering, e.g., in order to test how many clusters to use.
The pre-processing is done by solving an approximate version of the original
problem. In this approximation, not all pair-wise conflicts are calculated. The
approximation relies on knowing how many such pair-wise conflicts are
necessary to compute. To determine this, results on phase-transitions occurring
when coloring (or clustering) large random instances of a particular graph
ensemble are used.
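For readers unfamiliar with extremal optimization, the sketch below applies a generic tau-EO scheme to a small graph-coloring (conflict-minimisation) instance of the kind the pre-processing reduces to; the report encoding, graph ensemble and parameter values from the paper are not reproduced.

```python
# A generic tau-EO (extremal optimization) sketch on a small graph-coloring
# instance.  Variables are ranked by local fitness (most conflicts first) and
# one is picked with probability ~ rank^(-tau), then reassigned at random.
import random

def tau_eo_coloring(nodes, edges, n_colors=3, tau=1.4, steps=2000, seed=0):
    rng = random.Random(seed)
    color = {v: rng.randrange(n_colors) for v in nodes}
    adj = {v: [] for v in nodes}
    for u, v in edges:
        adj[u].append(v); adj[v].append(u)
    for _ in range(steps):
        # Local fitness = number of conflicting neighbours (lower is better).
        ranked = sorted(nodes, key=lambda v: -sum(color[v] == color[w] for w in adj[v]))
        weights = [(k + 1) ** (-tau) for k in range(len(nodes))]
        v = rng.choices(ranked, weights=weights, k=1)[0]
        color[v] = rng.randrange(n_colors)      # unconditionally update it
    conflicts = sum(color[u] == color[v] for u, v in edges)
    return color, conflicts

nodes = list(range(8))
edges = [(0,1),(1,2),(2,3),(3,0),(4,5),(5,6),(6,7),(7,4),(0,4),(2,6)]
print(tau_eo_coloring(nodes, edges))
```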
|
The Combination of Paradoxical, Uncertain, and Imprecise Sources of
Information based on DSmT and Neutro-Fuzzy Inference
|
The management and combination of uncertain, imprecise, fuzzy and even
paradoxical or highly conflicting sources of information has always been, and
still remains today, of primal importance for the development of reliable
modern information systems involving artificial reasoning. In this chapter, we
present a survey of our recent theory of plausible and paradoxical reasoning,
known as Dezert-Smarandache Theory (DSmT) in the literature, developed for
dealing with imprecise, uncertain and paradoxical sources of information. We
focus our presentation here rather on the foundations of DSmT, and on the two
important new rules of combination, than on browsing specific applications of
DSmT available in literature. Several simple examples are given throughout the
presentation to show the efficiency and the generality of this new approach.
The last part of this chapter concerns the presentation of the neutrosophic
logic, the neutro-fuzzy inference and its connection with DSmT. Fuzzy logic and
neutrosophic logic are useful tools in decision making after fusing the
information using the DSm hybrid rule of combination of masses.
|
Learning to automatically detect features for mobile robots using
second-order Hidden Markov Models
|
In this paper, we propose a new method based on Hidden Markov Models to
interpret temporal sequences of sensor data from mobile robots to automatically
detect features. Hidden Markov Models have been used for a long time in pattern
recognition, especially in speech recognition. Their main advantage over other
methods (such as neural networks) is their ability to model noisy temporal
signals of variable length. We show in this paper that this approach is well
suited for interpretation of temporal sequences of mobile-robot sensor data. We
present two distinct experiments and results: the first one in an indoor
environment where a mobile robot learns to detect features like open doors or
T-intersections, the second one in an outdoor environment where a different
mobile robot has to identify situations like climbing a hill or crossing a
rock.
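The kind of model comparison involved can be illustrated with a minimal first-order HMM likelihood computation over a discretised sensor sequence (the paper uses second-order HMMs; the toy models and symbols below are invented for illustration).

```python
# Minimal first-order HMM likelihood scoring of a discretised sensor sequence.
# The models, symbols and sequence below are invented for illustration only.
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of an observation sequence under a discrete HMM (scaled forward pass)."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum()); alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        loglik += np.log(alpha.sum()); alpha /= alpha.sum()
    return loglik

# Two toy 2-state models over 3 discretised range readings (near/medium/far).
pi = np.array([0.5, 0.5])
A_door     = np.array([[0.7, 0.3], [0.3, 0.7]])
B_door     = np.array([[0.1, 0.2, 0.7], [0.6, 0.3, 0.1]])
A_corridor = np.array([[0.9, 0.1], [0.1, 0.9]])
B_corridor = np.array([[0.3, 0.4, 0.3], [0.3, 0.4, 0.3]])

seq = [2, 2, 1, 0, 0, 2, 2]   # hypothetical readings while passing a doorway
scores = {"open_door": forward_loglik(seq, pi, A_door, B_door),
          "corridor":  forward_loglik(seq, pi, A_corridor, B_corridor)}
print(max(scores, key=scores.get), scores)
```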
|
Inferring knowledge from a large semantic network
|
In this paper, we present a rich semantic network based on a differential
analysis. We then detail implemented measures that take into account common and
differential features between words. In the last section, we describe some
industrial applications.
|
Towards Automated Integration of Guess and Check Programs in Answer Set
Programming: A Meta-Interpreter and Applications
|
Answer set programming (ASP) with disjunction offers a powerful tool for
declaratively representing and solving hard problems. Many NP-complete problems
can be encoded in the answer set semantics of logic programs in a very concise
and intuitive way, where the encoding reflects the typical "guess and check"
nature of NP problems: The property is encoded in a way such that polynomial
size certificates for it correspond to stable models of a program. However, the
problem-solving capacity of full disjunctive logic programs (DLPs) is beyond
NP, and captures a class of problems at the second level of the polynomial
hierarchy. While these problems also have a clear "guess and check" structure,
finding an encoding in a DLP reflecting this structure may sometimes be a
non-obvious task, in particular if the "check" itself is a coNP-complete
problem; usually, such problems are solved by interleaving separate guess and
check programs, where the check is expressed by inconsistency of the check
program. In this paper, we present general transformations of head-cycle free
(extended) disjunctive logic programs into stratified and positive (extended)
disjunctive logic programs based on meta-interpretation techniques. The answer
sets of the original and the transformed program are in simple correspondence,
and, moreover, inconsistency of the original program is indicated by a
designated answer set of the transformed program. Our transformations
facilitate the integration of separate "guess" and "check" programs, which are
often easy to obtain, automatically into a single disjunctive logic program.
Our results complement recent results on meta-interpretation in ASP, and extend
methods and techniques for a declarative "guess and check" problem solving
paradigm through ASP.
|
Clever Search: A WordNet Based Wrapper for Internet Search Engines
|
This paper presents an approach to enhance search engines with information
about word senses available in WordNet. The approach exploits information about
the conceptual relations within the lexical-semantic net. In the wrapper for
search engines presented, WordNet information is used to specify the user's
request or to classify the results of a publicly available web search engine,
such as Google, Yahoo, etc.
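A toy illustration of the WordNet information such a wrapper can draw on, using NLTK's WordNet interface, is given below; the query-expansion strategy shown is an assumption, not the paper's exact method, and requires the NLTK WordNet data to be installed.

```python
# Toy illustration of WordNet-based query refinement (requires NLTK and its
# WordNet data; the expansion strategy is an assumption, not the paper's method).
from nltk.corpus import wordnet as wn   # run nltk.download('wordnet') beforehand

query = "bank"
for synset in wn.synsets(query, pos=wn.NOUN)[:3]:
    lemmas = [l.name() for l in synset.lemmas()]
    hypernyms = [h.name().split(".")[0] for h in synset.hypernyms()]
    print(synset.name(), "-", synset.definition())
    print("  expand query with:", set(lemmas) | set(hypernyms))
```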
|
Issues in Exploiting GermaNet as a Resource in Real Applications
|
This paper reports about experiments with GermaNet as a resource within
domain-specific document analysis. The main question to be answered is: how
good is the coverage of GermaNet in a specific domain? We report on the results of a
field test of GermaNet for analyses of autopsy protocols and present a sketch
about the integration of GermaNet inside XDOC. Our remarks will contribute to a
GermaNet user's wish list.
|