The Computational Complexity of Probabilistic Planning
|
We examine the computational complexity of testing and finding small plans in
probabilistic planning domains with both flat and propositional
representations. The complexity of plan evaluation and existence varies with
the plan type sought; we examine totally ordered plans, acyclic plans, and
looping plans, and partially ordered plans under three natural definitions of
plan value. We show that problems of interest are complete for a variety of
complexity classes: PL, P, NP, co-NP, PP, NP^PP, co-NP^PP, and PSPACE. In the
process of proving that certain planning problems are complete for NP^PP, we
introduce a new basic NP^PP-complete problem, E-MAJSAT, which generalizes the
standard Boolean satisfiability problem to computations involving probabilistic
quantities; our results suggest that the development of good heuristics for
E-MAJSAT could be important for the creation of efficient algorithms for a wide
variety of problems.
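To make the new problem concrete: an E-MAJSAT instance is a Boolean formula over a set of choice variables and a set of chance variables, and it asks whether some assignment to the choice variables makes the formula true for a majority of assignments to the chance variables. A brute-force sketch in Python (the encoding, the uniform distribution over chance variables, and the strict-majority convention are our illustration, not the paper's formal definitions):

```python
from itertools import product

def e_majsat(formula, choice_vars, chance_vars):
    """Brute-force E-MAJSAT check: does some assignment to the choice
    variables make the formula true for strictly more than half of the
    assignments to the chance variables? Exponential; illustration only."""
    n = len(chance_vars)
    for xs in product([False, True], repeat=len(choice_vars)):
        fixed = dict(zip(choice_vars, xs))
        hits = sum(
            formula({**fixed, **dict(zip(chance_vars, ys))})
            for ys in product([False, True], repeat=n)
        )
        if 2 * hits > 2 ** n:  # strict majority of chance assignments
            return True
    return False

# (x or y1) and (not x or y2): either choice of x leaves the formula
# true for exactly half of the chance assignments, so the answer is False.
phi = lambda a: (a["x"] or a["y1"]) and ((not a["x"]) or a["y2"])
print(e_majsat(phi, ["x"], ["y1", "y2"]))  # False
```

Guessing the choice assignment corresponds to the NP part of the NP^PP characterization, and counting the chance assignments to the PP part.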
|
SYNERGY: A Linear Planner Based on Genetic Programming
|
In this paper we describe SYNERGY, a highly parallelizable linear planning
system based on the genetic programming paradigm. Rather than reasoning about
the world it is planning for, SYNERGY uses artificial selection, recombination,
and a fitness measure to generate linear plans that
solve conjunctive goals. We ran SYNERGY on several domains (e.g., the briefcase
problem and a few variants of the robot navigation problem), and the
experimental results show that our planner is capable of handling problem
instances that are one to two orders of magnitude larger than the ones solved
by UCPOP. To further reduce search and to enhance the
expressive power of SYNERGY, we also propose two major extensions to our
planning system: a formalism for using hierarchical planning operators, and a
framework for planning in dynamic environments.
|
The Essence of Constraint Propagation
|
We show that several constraint propagation algorithms (also called (local)
consistency, consistency enforcing, Waltz, filtering or narrowing algorithms)
are instances of algorithms that deal with chaotic iteration. To this end we
propose a simple abstract framework that allows us to classify and compare
these algorithms and to establish in a uniform way their basic properties.
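As a small illustration of the chaotic-iteration view (our own sketch, not the paper's abstract framework): propagation amounts to iterating a set of domain-reduction functions, in any fair order, until a common fixpoint is reached.

```python
def chaotic_iteration(domains, reductions):
    """Apply reduction functions to a dict of variable domains until none
    of them changes anything (a common fixpoint)."""
    changed = True
    while changed:
        changed = False
        for reduce_fn in reductions:
            new = reduce_fn(dict(domains))
            if new != domains:
                domains, changed = new, True
    return domains

# Two reduction functions enforcing x < y on set-valued domains.
def prune_x(d):
    d["x"] = {v for v in d["x"] if any(v < w for w in d["y"])}
    return d

def prune_y(d):
    d["y"] = {w for w in d["y"] if any(v < w for v in d["x"])}
    return d

print(chaotic_iteration({"x": {1, 2, 3, 4}, "y": {1, 2, 3}},
                        [prune_x, prune_y]))
# {'x': {1, 2}, 'y': {2, 3}}
```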
|
Towards a computational theory of human daydreaming
|
This paper examines the phenomenon of daydreaming: spontaneously recalling or
imagining personal or vicarious experiences in the past or future. The
following important roles of daydreaming in human cognition are postulated:
plan preparation and rehearsal, learning from failures and successes, support
for processes of creativity, emotion regulation, and motivation.
A computational theory of daydreaming and its implementation as the program
DAYDREAMER are presented. DAYDREAMER consists of 1) a scenario generator based
on relaxed planning, 2) a dynamic episodic memory of experiences used by the
scenario generator, 3) a collection of personal goals and control goals which
guide the scenario generator, 4) an emotion component in which daydreams
initiate, and are initiated by, emotional states arising from goal outcomes,
and 5) domain knowledge of interpersonal relations and common everyday
occurrences.
The role of emotions and control goals in daydreaming is discussed. Four
control goals commonly used in guiding daydreaming are presented:
rationalization, failure/success reversal, revenge, and preparation. The role
of episodic memory in daydreaming is considered, including how daydreamed
information is incorporated into memory and later used. An initial version of
DAYDREAMER which produces several daydreams (in English) is currently running.
|
A reusable iterative optimization software library to solve
combinatorial problems with approximate reasoning
|
Real world combinatorial optimization problems such as scheduling are
typically too complex to solve with exact methods. Additionally, the problems
often have to observe vaguely specified constraints of different importance,
the available data may be uncertain, and compromises between antagonistic
criteria may be necessary. We present a combination of approximate reasoning
based constraints and iterative optimization based heuristics that help to
model and solve such problems in a framework of C++ software libraries called
StarFLIP++. While the library was initially developed to schedule continuous
caster units in steel plants, in this paper we present results from reusing its
components in a shift scheduling system for the workforce of an industrial
production plant.
|
Modeling Belief in Dynamic Systems, Part II: Revision and Update
|
The study of belief change has been an active area in philosophy and AI. In
recent years two special cases of belief change, belief revision and belief
update, have been studied in detail. In a companion paper (Friedman & Halpern,
1997), we introduce a new framework to model belief change. This framework
combines temporal and epistemic modalities with a notion of plausibility,
allowing us to examine the change of beliefs over time. In this paper, we show
how belief revision and belief update can be captured in our framework. This
allows us to compare the assumptions made by each method, and to better
understand the principles underlying them. In particular, it shows that Katsuno
and Mendelzon's notion of belief update (Katsuno & Mendelzon, 1991a) depends on
several strong assumptions that may limit its applicability in artificial
intelligence. Finally, our analysis allows us to identify a notion of minimal
change that underlies a broad range of belief change operations including
revision and update.
|
The Symbol Grounding Problem
|
How can the semantic interpretation of a formal symbol system be made
intrinsic to the system, rather than just parasitic on the meanings in our
heads? How can the meanings of the meaningless symbol tokens, manipulated
solely on the basis of their (arbitrary) shapes, be grounded in anything but
other meaningless symbols? The problem is analogous to trying to learn Chinese
from a Chinese/Chinese dictionary alone. A candidate solution is sketched:
Symbolic representations must be grounded bottom-up in nonsymbolic
representations of two kinds: (1) "iconic representations," which are analogs
of the proximal sensory projections of distal objects and events, and (2)
"categorical representations," which are learned and innate feature-detectors
that pick out the invariant features of object and event categories from their
sensory projections. Elementary symbols are the names of these object and event
categories, assigned on the basis of their (nonsymbolic) categorical
representations. Higher-order (3) "symbolic representations," grounded in these
elementary symbols, consist of symbol strings describing category membership
relations (e.g., "An X is a Y that is Z").
|
Iterative Deepening Branch and Bound
|
In tree search problems, the best-first search algorithm needs too much
space. To remove this drawback, IDA* was developed, which is efficient in both
space and time. However, IDA* can fail to perform efficiently on real-valued
problems like Flow Shop Scheduling, Travelling Salesman and 0/1 Knapsack,
because of their real-valued cost estimates. Further modifications led to the
Iterative Deepening Branch and Bound search algorithm, which meets these
requirements. We have applied this algorithm to the Flow Shop Scheduling
Problem and found it quite effective.
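A minimal sketch of the bound-update scheme for real-valued costs, in the style of an IDA*-like recursion (names and details are our reconstruction of the idea, not the authors' exact algorithm):

```python
import math

def idbb(start, is_goal, successors, heuristic):
    """Iterative deepening with a real-valued cost bound: each pass is a
    depth-first search cut off at `bound`; the next bound is the smallest
    f-value that was pruned, which suits real-valued cost estimates.
    Returns (goal_node, cost); path reconstruction is omitted."""
    bound = heuristic(start)
    while bound < math.inf:
        found, next_bound = _dfs(start, 0.0, bound, is_goal,
                                 successors, heuristic)
        if found is not None:
            return found, next_bound
        bound = next_bound
    return None, math.inf

def _dfs(node, g, bound, is_goal, successors, heuristic):
    f = g + heuristic(node)
    if f > bound:
        return None, f            # pruned: report f for the next pass
    if is_goal(node):
        return node, f
    next_bound = math.inf
    for child, cost in successors(node):
        found, nb = _dfs(child, g + cost, bound, is_goal,
                         successors, heuristic)
        if found is not None:
            return found, nb
        next_bound = min(next_bound, nb)
    return None, next_bound

# Example: cheapest route s -> g in a tiny weighted graph, with h = 0.
graph = {"s": [("a", 1.5), ("b", 2.5)], "a": [("g", 2.0)],
         "b": [("g", 0.4)], "g": []}
print(idbb("s", lambda n: n == "g", graph.__getitem__, lambda n: 0.0))
# ('g', 2.9)
```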
|
Probabilistic Agent Programs
|
Agents are small programs that autonomously take actions based on changes in
their environment or ``state.'' Over the last few years, there have been an
increasing number of efforts to build agents that can interact and/or
collaborate with other agents. In one of these efforts, Eiter, Subrahmanian and
Pick (AIJ, 108(1-2), pages 179-255) have shown how agents may be built on top
of legacy code. However, their framework assumes that agent states are
completely determined, and there is no uncertainty in an agent's state. Thus,
their framework allows an agent developer to specify how his agents will react
when the agent is 100% sure about what is true/false in the world state. In
this paper, we propose the concept of a \emph{probabilistic agent program} and
show how, given an arbitrary program written in any imperative language, we may
build a declarative ``probabilistic'' agent program on top of it which supports
decision making in the presence of uncertainty. We provide two alternative
semantics for probabilistic agent programs. We show that the second semantics,
though more epistemically appealing, is more complex to compute. We provide
sound and complete algorithms to compute the semantics of \emph{positive} agent
programs.
|
Cox's Theorem Revisited
|
The assumptions needed to prove Cox's Theorem are discussed and examined.
Various sets of assumptions under which a Cox-style theorem can be proved are
provided, although all are rather strong and, arguably, not natural.
|
Uniform semantic treatment of default and autoepistemic logics
|
We revisit the issue of connections between two leading formalisms in
nonmonotonic reasoning: autoepistemic logic and default logic. For each logic
we develop a comprehensive semantic framework based on the notion of a belief
pair. The set of all belief pairs together with the so called knowledge
ordering forms a complete lattice. For each logic, we introduce several
semantics by means of fixpoints of operators on the lattice of belief pairs.
Our results elucidate an underlying isomorphism of the respective semantic
constructions. In particular, we show that the interpretation of defaults as
modal formulas proposed by Konolige allows us to represent all semantics for
default logic in terms of the corresponding semantics for autoepistemic logic.
Thus, our results conclusively establish that default logic can indeed be
viewed as a fragment of autoepistemic logic. However, as we also demonstrate,
the semantics of Moore and Reiter are given by different operators and occupy
different locations in their corresponding families of semantics. This result
explains the source of the longstanding difficulty to formally relate these two
semantics. In the paper, we also discuss approximating skeptical reasoning with
autoepistemic and default logics and establish constructive principles behind
such approximations.
|
On the accuracy and running time of GSAT
|
Randomized algorithms for deciding satisfiability were shown to be effective
in solving problems with thousands of variables. However, these algorithms are
not complete. That is, they provide no guarantee that a satisfying assignment,
if one exists, will be found. Thus, when studying randomized algorithms, there
are two important characteristics that need to be considered: the running time
and, even more importantly, the accuracy --- a measure of likelihood that a
satisfying assignment will be found, provided one exists. In fact, we argue
that without a reference to the accuracy, the notion of the running time for
randomized algorithms is not well-defined. In this paper, we introduce a formal
notion of accuracy. We use it to define a concept of running time. We use
both notions to study GSAT with the random walk strategy. We investigate
the dependence of accuracy on properties of input formulas such as
clause-to-variable ratio and the number of satisfying assignments. We
demonstrate that the running time of GSAT grows exponentially in the number of
variables of the input formula for randomly generated 3-CNF formulas and for
the formulas encoding 3- and 4-colorability of graphs.
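For reference, a compact sketch of GSAT with the random walk strategy (restarts and tie-breaking details are simplified; names are ours):

```python
import random

def gsat_walk(clauses, n_vars, max_flips=10_000, walk_prob=0.5):
    """Incomplete randomized SAT search. Clauses hold DIMACS-style
    literals (negative int = negated variable). With probability
    walk_prob flip a variable from a random unsatisfied clause,
    otherwise flip the variable maximizing satisfied clauses.
    Returns an assignment, or None on failure (even if satisfiable)."""
    assign = {v: random.choice([False, True]) for v in range(1, n_vars + 1)}
    sat = lambda lit: assign[abs(lit)] == (lit > 0)
    for _ in range(max_flips):
        unsat = [c for c in clauses if not any(sat(l) for l in c)]
        if not unsat:
            return assign
        if random.random() < walk_prob:
            var = abs(random.choice(random.choice(unsat)))
        else:
            def score(v):
                assign[v] = not assign[v]
                s = sum(any(sat(l) for l in c) for c in clauses)
                assign[v] = not assign[v]
                return s
            var = max(range(1, n_vars + 1), key=score)
        assign[var] = not assign[var]
    return None

# (x1 or x2) and (not x1 or x2) is satisfied by any assignment with x2 = True.
print(gsat_walk([[1, 2], [-1, 2]], 2))
```

The accuracy of such a procedure is precisely the probability that a run like this returns an assignment rather than None on a satisfiable input.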
|
Syntactic Autonomy: Why There is no Autonomy without Symbols and How
Self-Organization Might Evolve Them
|
Two different types of agency are discussed based on dynamically coherent and
incoherent couplings with an environment respectively. I propose that until a
private syntax (syntactic autonomy) is discovered by dynamically coherent
agents, there are no significant or interesting types of closure or autonomy.
When syntactic autonomy is established, then, because of a process of
description-based selected self-organization, open-ended evolution is enabled.
At this stage, agents depend, in addition to dynamics, on localized, symbolic
memory, thus adding a level of dynamical incoherence to their interaction with
the environment. Furthermore, it is the appearance of syntactic autonomy which
enables much more interesting types of closures amongst agents which share the
same syntax. To investigate how we can study the emergence of syntax from
dynamical systems, experiments with cellular automata leading to emergent
computation to solve non-trivial tasks are discussed. RNA editing is also
mentioned as a process that may have been used to obtain a primordial
biological code necessary for open-ended evolution.
|
Consistency Management of Normal Logic Program by Top-down Abductive
Proof Procedure
|
This paper presents a method of computing a revision of a function-free
normal logic program. If an added rule is inconsistent with a program, that is,
if it leads to a situation in which no stable model exists for the new program,
then deletion and addition of rules are performed to avoid inconsistency. We
specify a revision by translating a normal logic program into an abductive
logic program with abducibles to represent deletion and addition of rules. To
compute such deletion and addition, we propose an adaptation of our top-down
abductive proof procedure that computes the abducibles relevant to an added
rule. We compute a minimally revised program by choosing a minimal set of
abducibles among all the sets of abducibles computed by the top-down proof
procedure.
|
Defeasible Reasoning in OSCAR
|
This is a system description for the OSCAR defeasible reasoner.
|
Abductive and Consistency-Based Diagnosis Revisited: a Modeling
Perspective
|
Diagnostic reasoning has been characterized logically as consistency-based
reasoning or abductive reasoning. Previous analyses in the literature have
shown, on the one hand, that choosing the (in general more restrictive)
abductive definition may be appropriate or not, depending on the content of the
knowledge base [Console&Torasso91], and, on the other hand, that, depending on
the choice of definition, the same knowledge should be expressed in a
different form [Poole94].
Since in Model-Based Diagnosis a major problem is finding the right way of
abstracting the behavior of the system to be modeled, this paper discusses the
relation between modeling, and in particular abstraction in the model, and the
notion of diagnosis.
|
ACLP: Integrating Abduction and Constraint Solving
|
ACLP is a system which combines abductive reasoning and constraint solving by
integrating the frameworks of Abductive Logic Programming (ALP) and Constraint
Logic Programming (CLP). It forms a general high-level knowledge representation
environment for abductive problems in Artificial Intelligence and other areas.
In ACLP, the task of abduction is supported and enhanced by its non-trivial
integration with constraint solving, facilitating its application to complex
problems. The ACLP system is currently implemented on top of the CLP language
of ECLiPSe as a meta-interpreter exploiting its underlying constraint solver
for finite domains. It has been applied to the problems of planning and
scheduling in order to test its computational effectiveness compared with the
direct use of the (lower level) constraint solving framework of CLP on which it
is built. These experiments provide evidence that the abductive framework of
ACLP does not significantly compromise the computational efficiency of the
solutions. Other experiments show the natural ability of ACLP to accommodate
easily and in a robust way new or changing requirements of the original
problem.
|
Relevance Sensitive Non-Monotonic Inference on Belief Sequences
|
We present a method for relevance sensitive non-monotonic inference from
belief sequences which incorporates insights pertaining to prioritized
inference and relevance sensitive, inconsistency tolerant belief revision.
Our model uses a finite, logically open sequence of propositional formulas as
a representation for beliefs and defines a notion of inference from
maxiconsistent subsets of formulas guided by two orderings: a temporal
sequencing and an ordering based on relevance relations between the conclusion
and formulas in the sequence. The relevance relations are ternary (using
context as a parameter) as opposed to standard binary axiomatizations. The
inference operation thus defined easily handles iterated revision by
maintaining a revision history, blocks the derivation of inconsistent answers
from a possibly inconsistent sequence and maintains the distinction between
explicit and implicit beliefs. In doing so, it provides a finitely presented
formalism and a plausible model of reasoning for automated agents.
|
Probabilistic Default Reasoning with Conditional Constraints
|
We propose a combination of probabilistic reasoning from conditional
constraints with approaches to default reasoning from conditional knowledge
bases. In detail, we generalize the notions of Pearl's entailment in system Z,
Lehmann's lexicographic entailment, and Geffner's conditional entailment to
conditional constraints. We give some examples that show that the new notions
of z-, lexicographic, and conditional entailment have properties similar to
those of their classical counterparts. Moreover, we show that the new notions of z-,
lexicographic, and conditional entailment are proper generalizations of both
their classical counterparts and the classical notion of logical entailment for
conditional constraints.
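For readers unfamiliar with the objects involved: a conditional constraint bounds a conditional probability. In the usual notation (our gloss of the standard presentation, not a quotation from the paper), a probability distribution $\Pr$ satisfies the constraint

$$(\psi \mid \varphi)[l, u] \quad\text{iff}\quad l \,\le\, \Pr(\psi \mid \varphi) \,\le\, u,$$

and the entailment notions above generalize default entailment to knowledge bases of such constraints.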
|
A Compiler for Ordered Logic Programs
|
This paper describes a system, called PLP, for compiling ordered logic
programs into standard logic programs under the answer set semantics. In an
ordered logic program, rules are named by unique terms, and preferences among
rules are given by a set of dedicated atoms. An ordered logic program is
transformed into a second, regular, extended logic program wherein the
preferences are respected, in that the answer sets obtained in the transformed
theory correspond with the preferred answer sets of the original theory. Since
the result of the translation is an extended logic program, existing logic
programming systems can be used as the underlying reasoning engine. In particular,
PLP is conceived as a front-end to the logic programming systems dlv and
smodels.
|
SLDNFA-system
|
The SLDNFA-system results from the LP+ project at the K.U.Leuven, which
investigates logics for declarative knowledge representation and proof
procedures for these logics. Within this project, inductive definition logic
(ID-logic) is used as the representation logic. Different solvers are being
developed for this logic, and one of these is SLDNFA. A prototype of the system
is available and is used to investigate how to solve efficiently problems
represented in ID-logic.
|
Logic Programs with Compiled Preferences
|
We describe an approach for compiling preferences into logic programs under
the answer set semantics. An ordered logic program is an extended logic program
in which rules are named by unique terms, and in which preferences among rules
are given by a set of dedicated atoms. An ordered logic program is transformed
into a second, regular, extended logic program wherein the preferences are
respected, in that the answer sets obtained in the transformed theory
correspond with the preferred answer sets of the original theory. Our approach
allows both the specification of static orderings (as found in most previous
work), in which preferences are external to a logic program, and
orderings on sets of rules. In large part then, we are interested in describing
a general methodology for uniformly incorporating preference information in a
logic program. Since the result of our translation is an extended logic
program, we can make use of existing implementations, such as dlv and smodels.
To this end, we have developed a compiler, available on the web, as a front-end
for these programming systems.
|
Fuzzy Approaches to Abductive Inference
|
This paper proposes two kinds of fuzzy abductive inference in the framework
of fuzzy rule bases. The abductive inference processes described here depend on
the semantics of the rule. We distinguish two classes of interpretation of a
fuzzy rule: certainty generation rules and possibility generation rules. In this
paper we present the architecture of abductive inference for the first class of
interpretation. We give two kinds of problems that can be solved using the
proposed models of inference.
|
Problem solving in ID-logic with aggregates: some experiments
|
The goal of the LP+ project at the K.U.Leuven is to design an expressive
logic, suitable for declarative knowledge representation, and to develop
intelligent systems based on Logic Programming technology for solving
computational problems using the declarative specifications. ID-logic is an
integration of typed classical logic and a definition logic. Different
abductive solvers for this language are being developed. This paper reports on
the integration of higher-order aggregates into ID-logic and the consequences
for the solver SLDNFA.
|
Optimal Belief Revision
|
We propose a new approach to belief revision that provides a way to change
knowledge bases with a minimum of effort. We call this way of revising belief
states optimal belief revision. Our revision method gives special attention to
the fact that most belief revision processes are directed to a specific
informational objective. This approach to belief change is founded on notions
such as optimal context and accessibility. For the sentential model of belief
states we provide both a formal description of contexts as sub-theories
determined by three parameters and a method to construct contexts. Next, we
introduce an accessibility ordering for belief sets, which we then use for
selecting the best (optimal) contexts with respect to the processing effort
involved in the revision. Then, for finitely axiomatizable knowledge bases, we
characterize a finite accessibility ranking from which the accessibility
ordering for the entire base is generated and show how to determine the ranking
of an arbitrary sentence in the language. Finally, we define the adjustment of
the accessibility ranking of a revised base of a belief set.
|
cc-Golog: Towards More Realistic Logic-Based Robot Controllers
|
High-level robot controllers in realistic domains typically deal with
processes which operate concurrently, change the world continuously, and where
the execution of actions is event-driven as in ``charge the batteries as soon
as the voltage level is low''. While non-logic-based robot control languages
are well suited to express such scenarios, they fare poorly when it comes to
projecting, in a perspicuous way, how the world evolves when actions are
executed. On the other hand, a logic-based control language like \congolog,
based on the situation calculus, is well-suited for the latter. However, it has
problems expressing event-driven behavior. In this paper, we show how these
problems can be overcome by first extending the situation calculus to support
continuous change and event-driven behavior and then presenting \ccgolog, a
variant of \congolog which is based on the extended situation calculus. One
benefit of \ccgolog is that it narrows the gap in expressiveness compared to
non-logic-based control languages while preserving a semantically well-founded
projection mechanism.
|
Smodels: A System for Answer Set Programming
|
The Smodels system implements the stable model semantics for normal logic
programs. It handles a subclass of programs which contain no function symbols
and are domain-restricted but supports extensions including built-in functions
as well as cardinality and weight constraints. On top of this core engine more
involved systems can be built. As an example, we have implemented total and
partial stable model computation for disjunctive logic programs. An interesting
application method is based on answer set programming, i.e., encoding an
application problem as a set of rules so that its solutions are captured by the
stable models of the rules. Smodels has been applied to a number of areas
including planning, model checking, reachability analysis, product
configuration, dynamic constraint satisfaction, and feature interaction.
|
E-RES: A System for Reasoning about Actions, Events and Observations
|
E-RES is a system that implements the Language E, a logic for reasoning about
narratives of action occurrences and observations. E's semantics is
model-theoretic, but this implementation is based on a sound and complete
reformulation of E in terms of argumentation, and uses general computational
techniques of argumentation frameworks. The system derives sceptical
non-monotonic consequences of a given reformulated theory which exactly
correspond to consequences entailed by E's model-theory. The computation relies
on a complementary ability of the system to derive credulous non-monotonic
consequences together with a set of supporting assumptions which is sufficient
for the (credulous) conclusion to hold. E-RES allows theories to contain
general action laws, statements about action occurrences, observations and
statements of ramifications (or universal laws). It is able to derive
consequences both forward and backward in time. This paper gives a short
overview of the theoretical basis of E-RES and illustrates its use on a variety
of examples. Currently, E-RES is being extended so that the system can be used
for planning.
|
QUIP - A Tool for Computing Nonmonotonic Reasoning Tasks
|
In this paper, we outline the prototype of an automated inference tool,
called QUIP, which provides a uniform implementation for several nonmonotonic
reasoning formalisms. The theoretical basis of QUIP is derived from well-known
results about the computational complexity of nonmonotonic logics and exploits
a representation of the different reasoning tasks in terms of quantified
boolean formulae.
|
A Splitting Set Theorem for Epistemic Specifications
|
Over the past decade a considerable amount of research has been done to
expand logic programming languages to handle incomplete information. One such
language is the language of epistemic specifications. As is usual with logic
programming languages, the problem of answering queries is intractable in the
general case. For extended disjunctive logic programs, an idea that has proven
useful in simplifying the investigation of answer sets is the use of splitting
sets. In this paper we will present an extended definition of splitting sets
that will be applicable to epistemic specifications. Furthermore, an extension
of the splitting set theorem will be presented. Also, a characterization of
stratified epistemic specifications will be given in terms of splitting sets.
This characterization leads us to an algorithmic method of computing world
views of a subclass of epistemic logic programs.
|
DES: a Challenge Problem for Nonmonotonic Reasoning Systems
|
The US Data Encryption Standard, DES for short, is put forward as an
interesting benchmark problem for nonmonotonic reasoning systems because (i) it
provides a set of test cases of industrial relevance which shares features of
randomly generated problems and real-world problems, (ii) the representation of
DES using normal logic programs with the stable model semantics is simple and
easy to understand, and (iii) this subclass of logic programs can be seen as an
interesting special case for many other formalizations of nonmonotonic
reasoning. In this paper we present two encodings of DES as logic programs: a
direct one out of the standard specifications and an optimized one extending
the work of Massacci and Marraro. The computational properties of the encodings
are studied by using them for DES key search with the Smodels system as the
implementation of the stable model semantics. Results indicate that the
encodings and Smodels are quite competitive: they outperform state-of-the-art
SAT-checkers working with an optimized encoding of DES into SAT and are
comparable with a SAT-checker that is customized and tuned for the optimized
SAT encoding.
|
Fages' Theorem and Answer Set Programming
|
We generalize a theorem by Francois Fages that describes the relationship
between the completion semantics and the answer set semantics for logic
programs with negation as failure. The study of this relationship is important
in connection with the emergence of answer set programming. Whenever the two
semantics are equivalent, answer sets can be computed by a satisfiability
solver, and the use of answer set solvers such as smodels and dlv is
unnecessary. A logic programming representation of the blocks world due to
Ilkka Niemelae is discussed as an example.
|
On the tractable counting of theory models and its application to belief
revision and truth maintenance
|
We introduced decomposable negation normal form (DNNF) recently as a
tractable form of propositional theories, and provided a number of powerful
logical operations that can be performed on it in polynomial time. We also
presented an algorithm for compiling any conjunctive normal form (CNF) into
DNNF and provided a structure-based guarantee on its space and time complexity.
We present in this paper a linear-time algorithm for converting an ordered
binary decision diagram (OBDD) representation of a propositional theory into an
equivalent DNNF, showing that DNNFs scale as well as OBDDs. We also identify a
subclass of DNNF which we call deterministic DNNF, d-DNNF, and show that the
previous complexity guarantees on compiling DNNF continue to hold for this
stricter subclass, which has stronger properties. In particular, we present a
new operation on d-DNNF which allows us to count its models under the
assertion, retraction and flipping of every literal by traversing the d-DNNF
twice. That is, after such a traversal, we can test in constant time the
entailment of any literal by the d-DNNF, and the consistency of the d-DNNF
under the retraction or flipping of any literal. We demonstrate the
significance of these new operations by showing how they allow us to implement
linear-time, complete truth maintenance systems and linear-time, complete
belief revision systems for two important classes of propositional theories.
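The counting traversal rests directly on the two defining properties: decomposable AND nodes have variable-disjoint children, so their counts multiply, while deterministic OR nodes have mutually exclusive children, so their counts add. A minimal sketch for smooth d-DNNFs (the node encoding is ours, not the paper's data structures):

```python
from math import prod

# Node encoding (illustrative):
#   ("lit", v)           literal: variable v, or -v for its negation
#   ("and", [children])  decomposable conjunction (disjoint variables)
#   ("or", [children])   deterministic disjunction (disjoint models)

def count_models(node):
    """One bottom-up pass over a smooth d-DNNF: a literal leaf has one
    model over its own variable, AND multiplies child counts, OR adds them."""
    kind = node[0]
    if kind == "lit":
        return 1
    counts = [count_models(c) for c in node[1]]
    return prod(counts) if kind == "and" else sum(counts)

# (x and y) or (x and not y): deterministic (disjuncts disagree on y) and
# smooth (both disjuncts mention x and y); it has exactly two models.
circuit = ("or", [("and", [("lit", 1), ("lit", 2)]),
                  ("and", [("lit", 1), ("lit", -2)])])
print(count_models(circuit))  # 2
```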
|
BDD-based reasoning in the fluent calculus - first results
|
The paper reports on first preliminary results and insights gained in a
project aiming at implementing the fluent calculus using methods and techniques
based on binary decision diagrams. After reporting on an initial experiment
showing promising results we discuss our findings concerning various techniques
and heuristics used to speed up the reasoning process.
|
Planning with Incomplete Information
|
Planning is a natural domain of application for frameworks of reasoning about
actions and change. In this paper we study how one such framework, the Language
E, can form the basis for planning under (possibly) incomplete information. We
define two types of plans: weak and safe plans, and propose a planner, called
the E-Planner, which is often able to extend an initial weak plan into a safe
plan even though the (explicit) information available is incomplete, e.g. for
cases where the initial state is not completely known. The E-Planner is based
upon a reformulation of the Language E in argumentation terms and a natural
proof theory resulting from the reformulation. It uses an extension of this
proof theory by means of abduction for the generation of plans and adopts
argumentation-based techniques for extending weak plans into safe plans. We
provide representative examples illustrating the behaviour of the E-Planner, in
particular for cases where the status of fluents is incompletely known.
|
Local Diagnosis
|
In an earlier work, we have presented operations of belief change which only
affect the relevant part of a belief base. In this paper, we propose the
application of the same strategy to the problem of model-based diagnosis. We
first isolate the subset of the system description which is relevant for a
given observation and then solve the diagnosis problem for this subset.
|
A Consistency-Based Model for Belief Change: Preliminary Report
|
We present a general, consistency-based framework for belief change.
Informally, in revising K by A, we begin with A and incorporate as much of K as
consistently possible. Formally, a knowledge base K and sentence A are
expressed, via renaming propositions in K, in separate languages. Using a
maximization process, we assume the languages are the same insofar as
consistently possible. Lastly, we express the resultant knowledge base in a
single language. There may be more than one way in which A can be so extended
by K: in choice revision, one such ``extension'' represents the revised state;
alternatively, revision consists of the intersection of all such extensions.
The most general formulation of our approach is flexible enough to express
other approaches to revision and update, the merging of knowledge bases, and
the incorporation of static and dynamic integrity constraints. Our framework
differs from work based on ordinal conditional functions, notably with respect
to iterated revision. We argue that the approach is well-suited for
implementation: the choice revision operator gives better complexity results
than general revision; the approach can be expressed in terms of a finite
knowledge base; and the scope of a revision can be restricted to just those
propositions mentioned in the sentence for revision A.
|
SATEN: An Object-Oriented Web-Based Revision and Extraction Engine
|
SATEN is an object-oriented web-based extraction and belief revision engine.
It runs on any computer via a Java 1.1 enabled browser such as Netscape 4.
SATEN performs belief revision based on the AGM approach. The extraction and
belief revision reasoning engines operate on a user specified ranking of
information. One of the features of SATEN is that it can be used to integrate
mutually inconsistent commensurate rankings into a consistent ranking.
|
dcs: An Implementation of DATALOG with Constraints
|
Answer-set programming (ASP) has emerged recently as a viable programming
paradigm. We describe here an ASP system, DATALOG with constraints or DC, based
on non-monotonic logic. Informally, DC theories consist of propositional
clauses (constraints) and of Horn rules. The semantics is a simple and natural
extension of the semantics of the propositional logic. However, thanks to the
presence of Horn rules in the system, modeling of transitive closure becomes
straightforward. We describe the syntax, use and implementation of DC and
provide experimental results.
|
DATALOG with constraints - an answer-set programming system
|
Answer-set programming (ASP) has emerged recently as a viable programming
paradigm well attuned to search problems in AI, constraint satisfaction and
combinatorics. Propositional logic is, arguably, the simplest ASP system with
an intuitive semantics supporting direct modeling of problem constraints.
However, for some applications, especially those requiring that transitive
closure be computed, it requires additional variables and results in large
theories. Consequently, it may not be a practical computational tool for such
problems. On the other hand, ASP systems based on nonmonotonic logics, such as
stable logic programming, can handle transitive closure computation efficiently
and, in general, yield very concise theories as problem representations. Their
semantics is, however, more complex. Searching for the middle ground, in this
paper we introduce a new nonmonotonic logic, DATALOG with constraints or DC.
Informally, DC theories consist of propositional clauses (constraints) and of
Horn rules. The semantics is a simple and natural extension of the semantics of
the propositional logic. However, thanks to the presence of Horn rules in the
system, modeling of transitive closure becomes straightforward. We describe the
syntax and semantics of DC, and study its properties. We discuss an
implementation of DC and present results of experimental study of the
effectiveness of DC, comparing it with CSAT, a satisfiability checker, and
SMODELS, an implementation of stable logic programming. Our results show that
DC is competitive with the other two approaches on many search problems, often
yielding much more efficient solutions.
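To see why the Horn-rule component makes transitive closure straightforward, here is a brute-force least-model computation in Python (the encoding is our illustration, not DC's concrete syntax):

```python
def horn_fixpoint(facts, rules):
    """Least model of a set of ground Horn rules by forward chaining.
    A rule is a pair (head, body): the head is derived once every body
    atom is in the model."""
    model = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in model and all(b in model for b in body):
                model.add(head)
                changed = True
    return model

# Transitive closure via the usual two rules, grounded over three nodes:
#   reach(X,Y) <- edge(X,Y).    reach(X,Z) <- edge(X,Y), reach(Y,Z).
nodes = "abc"
rules = [(("reach", x, y), [("edge", x, y)]) for x in nodes for y in nodes]
rules += [(("reach", x, z), [("edge", x, y), ("reach", y, z)])
          for x in nodes for y in nodes for z in nodes]
facts = {("edge", "a", "b"), ("edge", "b", "c")}
print(sorted(t for t in horn_fixpoint(facts, rules) if t[0] == "reach"))
# [('reach', 'a', 'b'), ('reach', 'a', 'c'), ('reach', 'b', 'c')]
```

Encoding the same relation in pure propositional logic would need auxiliary variables to rule out unfounded reachability facts, which is the blow-up the abstract alludes to.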
|
Some Remarks on Boolean Constraint Propagation
|
We study here the well-known propagation rules for Boolean constraints. First
we propose a simple notion of completeness for sets of such rules and establish
a completeness result. Then we show an equivalence in an appropriate sense
between Boolean constraint propagation and unit propagation, a form of
resolution for propositional logic.
Subsequently we characterize one set of such rules by means of the notion of
hyper-arc consistency introduced in (Mohr and Masini 1988). Also, we clarify
the status of a similar, though different, set of rules introduced in (Simonis
1989a) and more fully in (Codognet and Diaz 1996).
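For concreteness, a minimal sketch of unit propagation, the form of resolution referred to above (DIMACS-style integer literals; the encoding is our choice):

```python
def unit_propagate(clauses):
    """Repeatedly pick a unit clause, fix its literal, and simplify.
    Returns (assignment, residual clauses), or None if the empty
    clause (a contradiction) is derived."""
    assignment = {}
    clauses = [list(c) for c in clauses]
    while True:
        unit = next((c[0] for c in clauses if len(c) == 1), None)
        if unit is None:
            return assignment, clauses
        assignment[abs(unit)] = unit > 0
        new_clauses = []
        for c in clauses:
            if unit in c:
                continue                    # clause already satisfied
            reduced = [l for l in c if l != -unit]
            if not reduced:
                return None                 # empty clause: conflict
            new_clauses.append(reduced)
        clauses = new_clauses

# Propagating through (x1) and (not x1 or x2) and (not x2 or x3):
print(unit_propagate([[1], [-1, 2], [-2, 3]]))
# ({1: True, 2: True, 3: True}, [])
```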
|
Conditional Plausibility Measures and Bayesian Networks
|
A general notion of algebraic conditional plausibility measures is defined.
Probability measures, ranking functions, possibility measures, and (under the
appropriate definitions) sets of probability measures can all be viewed as
defining algebraic conditional plausibility measures. It is shown that
algebraic conditional plausibility measures can be represented using Bayesian
networks.
|
Constraint compiling into rules formalism for dynamic CSPs computing
|
In this paper we present a rule-based formalism for filtering the domains of
constraint variables. This formalism is well adapted to solving dynamic CSPs.
We take diagnosis as an example problem to illustrate the use of these rules.
A diagnosis problem is seen as finding all the minimal sets of constraints to
be relaxed in the constraint network that models the device to be diagnosed.
|
Brainstorm/J: a Java Framework for Intelligent Agents
|
Despite the effort of many researchers in the area of multi-agent systems
(MAS) in designing and programming agents, only a few years ago did the
research community begin to recognize that different MAS share common
features. Based on these common features, several tools have tackled the
problem of agent development for specific application domains or specific
types of agents. As a consequence, their scope is restricted to a subset of
the huge application domain of MAS. In this paper we propose a generic
infrastructure for programming agents, named Brainstorm/J. The infrastructure
has been implemented as an object-oriented framework. As a consequence, our
approach supports a broader scope of MAS applications than previous efforts,
while being flexible and reusable.
|
On the relationship between fuzzy logic and four-valued relevance logic
|
In fuzzy propositional logic, each proposition is assigned a partial truth
value in [0,1]. It is well known that under certain circumstances, fuzzy logic
collapses to classical logic. In this paper, we will show that under dual
conditions, fuzzy logic collapses to four-valued (relevance) logic, where
propositions have truth-value true, false, unknown, or contradiction. As a
consequence, fuzzy entailment may be considered as ``in between'' four-valued
(relevance) entailment and classical entailment.
|
Causes and Explanations: A Structural-Model Approach, Part I: Causes
|
We propose a new definition of actual cause, using structural equations to
model counterfactuals. We show that the definition yields a plausible and
elegant account of causation that handles well examples which have caused
problems for other definitions and resolves major difficulties in the
traditional account.
|
Logic Programming Approaches for Representing and Solving Constraint
Satisfaction Problems: A Comparison
|
Many logic programming based approaches can be used to describe and solve
combinatorial search problems. On the one hand there is constraint logic
programming which computes a solution as an answer substitution to a query
containing the variables of the constraint satisfaction problem. On the other
hand there are systems based on stable model semantics, abductive systems, and
first order logic model generators which compute solutions as models of some
theory. This paper compares these different approaches from the point of view
of knowledge representation (how declarative are the programs) and from the
point of view of performance (how good are they at solving typical problems).
|
Multi-Channel Parallel Adaptation Theory for Rule Discovery
|
In this paper, we introduce a new machine learning theory based on
multi-channel parallel adaptation for rule discovery. This theory is
distinguished from the familiar parallel-distributed adaptation theory of
neural networks in terms of channel-based convergence to the target rules. We
show how to realize this theory in a learning system named CFRule. CFRule is a
parallel weight-based model, but it departs from traditional neural computing
in that its internal knowledge is comprehensible. Furthermore, when the model
converges upon training, each channel converges to a target rule. The model
adaptation rule is derived by multi-level parallel weight optimization based on
gradient descent. Since, however, gradient descent only guarantees local
optimization, a multi-channel regression-based optimization strategy is
developed to effectively deal with this problem. Formally, we prove that the
CFRule model can explicitly and precisely encode any given rule set. Also, we
prove a property related to asynchronous parallel convergence, which is a
critical element of the multi-channel parallel adaptation theory for rule
learning. Thanks to the quantizable nature of the CFRule model, rules can be
extracted completely and soundly via a threshold-based mechanism. Finally, the
practical application of the theory is demonstrated in DNA promoter recognition
and hepatitis prognosis prediction.
|
A Constraint-Driven System for Contract Assembly
|
We present an approach for modelling the structure and coarse content of
legal documents with a view to providing automated support for the drafting of
contracts and contract database retrieval. The approach is designed to be
applicable where contract drafting is based on model-form contracts or on
existing examples of a similar type. The main features of the approach are: (1)
the representation addresses the structure and the interrelationships between
the constituent parts of contracts, but not the text of the document itself;
(2) the representation of documents is separated from the mechanisms that
manipulate it; and (3) the drafting process is subject to a collection of
explicitly stated constraints that govern the structure of the documents. We
describe the representation of document instances and of 'generic documents',
which are data structures used to drive the creation of new document instances,
and we show extracts from a sample session to illustrate the features of a
prototype system implemented in MacProlog.
|
Modelling Contractual Arguments
|
One influential approach to assessing the "goodness" of arguments is offered
by the Pragma-Dialectical school (p-d) (Eemeren & Grootendorst 1992). This can
be compared with Rhetorical Structure Theory (RST) (Mann & Thompson 1988), an
approach that originates in discourse analysis. In p-d terms an argument is
good if it avoids committing a fallacy, whereas in RST terms an argument is
good if it is coherent. RST has been criticised (Snoeck Henkemans 1997) for
providing only a partially functional account of argument, and similar
criticisms have been raised in the Natural Language Generation (NLG)
community, particularly by Moore & Pollack (1992), with regard to its account
of intentionality in text in general. Mann and Thompson themselves note that
although RST can be successfully applied to a wide range of texts from diverse
domains, it fails to characterise some types of text, most notably legal
contracts. There is ongoing research in the Artificial Intelligence and Law
community exploring the potential for providing electronic support to contract
negotiators, focusing on long-term, complex engineering agreements (see for
example Daskalopulu & Sergot 1997). This paper provides a brief introduction to
RST and illustrates its shortcomings with respect to contractual text. An
alternative approach for modelling argument structure is presented which not
only caters for contractual text, but also overcomes the aforementioned
limitations of RST.
|
Information Integration and Computational Logic
|
Information Integration is a young and exciting field with enormous research
and commercial significance in the new world of the Information Society. It
stands at the crossroad of Databases and Artificial Intelligence requiring
novel techniques that bring together different methods from these fields.
Information from disparate heterogeneous sources, often with no a priori common
schema, needs to be synthesized in a flexible, transparent and intelligent way
in order to respond to the demands of a query, thus enabling a more informed
decision by the user or application program. The field, although relatively
young, has already found many practical applications, particularly for
integrating information over the World Wide Web. This paper gives a brief
introduction to the field, highlighting some of the main current and future
research issues and application areas. It attempts to evaluate the current and
potential role of Computational Logic in this field and suggests some of the
problems where logic-based techniques could be used.
|
Enhancing Constraint Propagation with Composition Operators
|
Constraint propagation is a general algorithmic approach for pruning the
search space of a CSP. In a uniform way, K. R. Apt has defined a computation as
an iteration of reduction functions over a domain. He has also demonstrated the
need for integrating static properties of reduction functions (commutativity
and semi-commutativity) to design specialized algorithms such as AC3 and DAC.
We introduce here a set of operators for modeling compositions of reduction
functions. Two of the major goals are to tackle parallel computations and
dynamic behaviours (such as slow convergence).
|
On Properties of Update Sequences Based on Causal Rejection
|
We consider an approach to update nonmonotonic knowledge bases represented as
extended logic programs under answer set semantics. New information is
incorporated into the current knowledge base subject to a causal rejection
principle enforcing that, in case of conflicts, more recent rules are preferred
and older rules are overridden. Such a rejection principle is also exploited in
other approaches to update logic programs, e.g., in dynamic logic programming
by Alferes et al. We give a thorough analysis of properties of our approach, to
get a better understanding of the causal rejection principle. We review
postulates for update and revision operators from the area of theory change and
nonmonotonic reasoning, and some new properties are considered as well. We then
consider refinements of our semantics which incorporate a notion of minimality
of change. As well, we investigate the relationship to other approaches,
showing that our approach is semantically equivalent to inheritance programs by
Buccafurri et al. and that it coincides with certain classes of dynamic logic
programs, for which we provide characterizations in terms of graph conditions.
Therefore, most of our results about properties of the causal rejection principle
apply to these approaches as well. Finally, we deal with computational
complexity of our approach, and outline how the update semantics and its
refinements can be implemented on top of existing logic programming engines.
|
Gradient-based Reinforcement Planning in Policy-Search Methods
|
We introduce a learning method called ``gradient-based reinforcement
planning'' (GREP). Unlike traditional DP methods that improve their policy
backwards in time, GREP is a gradient-based method that plans ahead and
improves its policy before it actually acts in the environment. We derive
formulas for the exact policy gradient that maximizes the expected future
reward and confirm our ideas with numerical experiments.
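The abstract does not reproduce the derivation, but the standard exact policy-gradient identity that such methods build on has the form (our gloss, not necessarily the paper's exact formulas):

$$\nabla_\theta J(\theta) \;=\; \mathbb{E}_{\tau \sim \pi_\theta}\!\left[\sum_{t} \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, R(\tau)\right],$$

where $J(\theta)$ is the expected future reward of policy $\pi_\theta$ and $R(\tau)$ is the return of trajectory $\tau$; planning ahead then amounts to estimating this gradient from simulated rather than actually executed trajectories.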
|
Rational Competitive Analysis
|
Much work in computer science has adopted competitive analysis as a tool for
decision making under uncertainty. In this work we extend competitive analysis
to the context of multi-agent systems. Unlike classical competitive analysis
where the behavior of an agent's environment is taken to be arbitrary, we
consider the case where an agent's environment consists of other agents. These
agents will usually obey some (minimal) rationality constraints. This leads to
the definition of rational competitive analysis. We introduce the concept of
rational competitive analysis, and initiate the study of competitive analysis
for multi-agent systems. We also discuss the application of rational
competitive analysis to the context of bidding games, as well as to the
classical one-way trading problem.
|
A theory of experiment
|
This article aims at clarifying the language and practice of scientific
experiment, mainly by connecting observability with calculability.
|
Nonmonotonic Reasoning, Preferential Models and Cumulative Logics
|
Many systems that exhibit nonmonotonic behavior have been described and
studied already in the literature. The general notion of nonmonotonic
reasoning, though, has almost always been described only negatively, by the
property it does not enjoy, i.e. monotonicity. We study here general patterns
of nonmonotonic reasoning and try to isolate properties that could help us map
the field of nonmonotonic reasoning by reference to positive properties. We
concentrate on a number of families of nonmonotonic consequence relations,
defined in the style of Gentzen. Both proof-theoretic and semantic points of
view are developed in parallel. The former point of view was pioneered by D.
Gabbay, while the latter has been advocated by Y. Shoham. Five such families
are defined and characterized by representation theorems, relating the two
points of view. One of the families of interest, that of preferential
relations, turns out to have been studied by E. Adams. The "preferential"
models proposed here are a much stronger tool than Adams' probabilistic
semantics. The basic language used in this paper is that of propositional
logic. The extension of our results to first order predicate calculi and the
study of the computational complexity of the decision problems described in
this paper will be treated in another paper.
|
What does a conditional knowledge base entail?
|
This paper presents a logical approach to nonmonotonic reasoning based on the
notion of a nonmonotonic consequence relation. A conditional knowledge base,
consisting of a set of conditional assertions of the type "if ... then ...",
represents the explicit defeasible knowledge an agent has about the way the
world generally behaves. We look for a plausible definition of the set of all
conditional assertions entailed by a conditional knowledge base. In a previous
paper, S. Kraus and the authors defined and studied "preferential" consequence
relations. They noticed that not all preferential relations could be considered
as reasonable inference procedures. This paper studies a more restricted class
of consequence relations, "rational" relations. It is argued that any
reasonable nonmonotonic inference procedure should define a rational relation.
It is shown that the rational relations are exactly those that may be
represented by a "ranked" preferential model, or by a (non-standard)
probabilistic model. The rational closure of a conditional knowledge base is
defined and shown to provide an attractive answer to the question of the title.
Global properties of this closure operation are proved: it is a cumulative
operation. It is also computationally tractable. This paper assumes the
underlying language is propositional.
|
A note on Darwiche and Pearl
|
It is shown that Darwiche and Pearl's postulates imply an interesting
property, not noticed by the authors.
|
Distance Semantics for Belief Revision
|
A vast and interesting family of natural semantics for belief revision is
defined. Suppose one is given a distance d between any two models. One may then
define the revision of a theory K by a formula a as the theory defined by the
set of all those models of a that are closest, by d, to the set of models of K.
This family is characterized by a set of rationality postulates that extends
the AGM postulates. The new postulates describe properties of iterated
revisions.
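A direct, brute-force reading of this semantics, with Hamming distance as one concrete choice of d (the code is our illustration):

```python
def hamming(m1, m2):
    """Hamming distance between models given as tuples of truth values."""
    return sum(a != b for a, b in zip(m1, m2))

def revise(k_models, a_models, d=hamming):
    """Models of the revision K * a: the models of a closest, by d, to
    the set of models of K."""
    if not k_models:
        return set(a_models)
    closest = lambda m: min(d(m, k) for k in k_models)
    best = min(closest(m) for m in a_models)
    return {m for m in a_models if closest(m) == best}

# K has the single model (p=T, q=T); revising by "not p" keeps the
# not-p model that changes as little as possible.
k = {(True, True)}
a = {(False, True), (False, False)}  # models of "not p" over (p, q)
print(revise(k, a))                  # {(False, True)}
```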
|
Preferred History Semantics for Iterated Updates
|
We give a semantics to iterated update by a preference relation on possible
developments. An iterated update is a sequence of formulas, giving (incomplete)
information about successive states of the world. A development is a sequence
of models, describing a possible trajectory through time. We assume a principle
of inertia and prefer those developments that are compatible with the
information and avoid unnecessary changes. The logical properties of the
updates defined in this way are considered, and a representation result is
proved.
|
Nonmonotonic inference operations
|
A. Tarski proposed the study of infinitary consequence operations as the
central topic of mathematical logic. He considered monotonicity to be a
property of all such operations. In this paper, we weaken the monotonicity
requirement and consider more general operations, inference operations. These
operations describe the nonmonotonic logics both humans and machines seem to be
using when inferring defeasible information from incomplete knowledge. We single
out a number of interesting families of inference operations. This study of
infinitary inference operations is inspired by the results of Kraus, Lehmann
and Magidor on finitary nonmonotonic operations, but this paper is
self-contained.
|
The logical meaning of Expansion
|
The Expansion property considered by researchers in Social Choice is shown to
correspond to a logical property of nonmonotonic consequence relations that is
the {\em pure}, i.e., not involving connectives, version of a previously known
weak rationality condition. The assumption that the union of two definable sets
of models is definable is needed for the soundness part of the result.
|
Another perspective on Default Reasoning
|
The lexicographic closure of any given finite set D of normal defaults is
defined. A conditional assertion "if a then b" is in this lexicographic closure
if, given the defaults D and the fact a, one would conclude b. The
lexicographic closure is essentially a rational extension of D, and of its
rational closure, defined in a previous paper. It provides a logic of normal
defaults that is different from the one proposed by R. Reiter and that is rich
enough not to require the consideration of non-normal defaults. A large number
of examples are provided to show that the lexicographic closure corresponds to
the basic intuitions behind Reiter's logic of defaults.
|
Deductive Nonmonotonic Inference Operations: Antitonic Representations
|
We provide a characterization of those nonmonotonic inference operations C
for which C(X) may be described as the set of all logical consequences of X
together with some set of additional assumptions S(X) that depends
anti-monotonically on X (i.e., X is a subset of Y implies that S(Y) is a subset
of S(X)). The operations represented are exactly characterized in terms of
properties, most of which have been studied in Freund-Lehmann (cs.AI/0202031).
Similar characterizations of right-absorbing and cumulative operations are also
provided. For cumulative operations, our results fit in closely with those of
Freund. We then discuss extending finitary operations to infinitary operations
in a canonical way and discuss co-compactness properties. Our results provide a
satisfactory notion of pseudo-compactness, generalizing to deductive
nonmonotonic operations the notion of compactness for monotonic operations.
They also provide an alternative, more elegant and more general, proof of the
existence of an infinitary deductive extension for any finitary deductive
operation (Theorem 7.9 of Freund-Lehmann).
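In symbols, the representation characterized above is

$$C(X) \;=\; Cn\bigl(X \cup S(X)\bigr), \qquad X \subseteq Y \;\Longrightarrow\; S(Y) \subseteq S(X),$$

where $Cn$ is classical consequence and $S$ is the antitone map supplying the additional assumptions.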
|
Stereotypical Reasoning: Logical Properties
|
Stereotypical reasoning assumes that the situation at hand is one of a kind
and that it enjoys the properties generally associated with that kind of
situation. It is one of the most basic forms of nonmonotonic reasoning. A
formal model for stereotypical reasoning is proposed and the logical properties
of this form of reasoning are studied. Stereotypical reasoning is shown to be
cumulative under weak assumptions.
|
A Framework for Compiling Preferences in Logic Programs
|
We introduce a methodology and framework for expressing general preference
information in logic programming under the answer set semantics. An ordered
logic program is an extended logic program in which rules are named by unique
terms, and in which preferences among rules are given by a set of atoms of form
s < t where s and t are names. An ordered logic program is transformed into a
second, regular, extended logic program wherein the preferences are respected,
in that the answer sets obtained in the transformed program correspond with the
preferred answer sets of the original program. Our approach allows the
specification of dynamic orderings, in which preferences can appear arbitrarily
within a program. Static orderings (in which preferences are external to a
logic program) are a trivial restriction of the general dynamic case. First, we
develop a specific approach to reasoning with preferences, wherein the
preference ordering specifies the order in which rules are to be applied. We
then demonstrate the wide range of applicability of our framework by showing
how other approaches, among them that of Brewka and Eiter, can be captured
within our framework. Since the result of each of these transformations is an
extended logic program, we can make use of existing implementations, such as
dlv and smodels. To this end, we have developed a publicly available compiler
as a front-end for these programming systems.
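A minimal illustration of the format (hypothetical rule names and predicates,
loosely in the paper's style):

  r1: flies(X) <- bird(X), not -flies(X).
  r2: -flies(X) <- penguin(X), not flies(X).
  r1 < r2.

For an individual known to be both a bird and a penguin, the unordered program
has two answer sets; the preference atom r1 < r2 records that one rule takes
precedence over the other (the paper fixes the exact reading of <), and the
transformed program retains only the correspondingly preferred answer set.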
|
Two results for prioritized logic programming
|
Prioritized default reasoning has illustrated its rich expressiveness and
flexibility in knowledge representation and reasoning. However, many important
aspects of prioritized default reasoning have yet to be thoroughly explored. In
this paper, we investigate two properties of prioritized logic programs in the
context of answer set semantics. Specifically, we reveal a close relationship
between mutual defeasibility and uniqueness of the answer set for a prioritized
logic program. We then explore how the splitting technique for extended logic
programs can be extended to prioritized logic programs. We prove splitting
theorems that can be used to simplify the evaluation of a prioritized logic
program under certain conditions.
|
Belief Revision and Rational Inference
|
The (extended) AGM postulates for belief revision seem to deal with the
revision of a given theory K by an arbitrary formula, but not to constrain the
revisions of two different theories by the same formula. A new postulate is
proposed and compared with other similar postulates that have been proposed in
the literature. The AGM revisions that satisfy this new postulate stand in
one-to-one correspondence with the rational, consistency-preserving relations.
This correspondence is described explicitly. Two viewpoints on iterative
revisions are distinguished and discussed.
|
Ultimate approximations in nonmonotonic knowledge representation systems
|
We study fixpoints of operators on lattices. To this end we introduce the
notion of an approximation of an operator. We order approximations by means of
a precision ordering. We show that each lattice operator O has a unique most
precise or ultimate approximation. We demonstrate that fixpoints of this
ultimate approximation provide useful insights into fixpoints of the operator
O.
We apply our theory to logic programming and introduce the ultimate
Kripke-Kleene, well-founded and stable semantics. We show that the ultimate
Kripke-Kleene and well-founded semantics are more precise than their standard
counterparts. We argue that ultimate semantics for logic programming have
attractive epistemological properties and that, while in general they are
computationally more complex than the standard semantics, for many classes of
theories, their complexity is no worse.
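In outline (our reconstruction of the construction; details may differ from the
paper): an approximation of O : L -> L is an operator on L^2 that is monotone
with respect to the precision order, (x, y) \le_p (x', y') iff x \le x' and
y' \le y, and agrees with O on the diagonal. The ultimate approximation sends
each interval [x, y] to the tightest bounds on O over that interval:
\[ U_O(x,y) \;=\; \Bigl(\bigwedge O([x,y]),\ \bigvee O([x,y])\Bigr), \qquad O([x,y]) = \{\,O(z) : x \le z \le y\,\}. \]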
|
Handling Defeasibilities in Action Domains
|
Representing defeasibility is an important issue in common sense reasoning.
In reasoning about action and change, this issue becomes more difficult because
domain and action related defeasible information may conflict with general
inertia rules. Furthermore, different types of defeasible information may also
interfere with each other during the reasoning. In this paper, we develop a
prioritized logic programming approach to handle defeasibilities in reasoning
about action. In particular, we propose three action languages {\cal AT}^{0},
{\cal AT}^{1} and {\cal AT}^{2} which handle three types of defeasibilities in
action domains named defeasible constraints, defeasible observations and
actions with defeasible and abnormal effects respectively. Each language with a
higher superscript can be viewed as an extension of the language with a lower
superscript. These action languages inherit the simple syntax of {\cal A}
language but their semantics is developed in terms of transition systems where
transition functions are defined based on prioritized logic programs. By
illustrating various examples, we show that our approach eventually provides a
powerful mechanism to handle various defeasibilities in temporal prediction and
postdiction. We also investigate semantic properties of these three action
languages and characterize classes of action domains that present more
desirable solutions in reasoning about action within the underlying action
languages.
|
Anticipatory Guidance of Plot
|
An anticipatory system for guiding plot development in interactive narratives
is described. The executable model is a finite automaton that provides the
implemented system with a look-ahead. The identification of undesirable future
states in the model is used to guide the player, in a transparent manner. In
this way, overly radical plot twists can be avoided. Since the player
participates in the development of the plot, such guidance can have many forms,
depending on the environment of the player, on the behavior of the other
players, and on the means of player interaction. We present a design method for
interactive narratives which produces designs suitable for the implementation
of anticipatory mechanisms. Use of the method is illustrated by application to
our interactive computer game Kaktus.
|
Abduction, ASP and Open Logic Programs
|
Open logic programs and open entailment have been recently proposed as an
abstract framework for the verification of incomplete specifications based upon
normal logic programs and the stable model semantics. There are obvious
analogies between open predicates and abducible predicates. However, despite
superficial similarities, there are features of open programs that have no
immediate counterpart in the framework of abduction and vice versa. Similarly,
open programs cannot be immediately simulated with answer set programming
(ASP). In this paper we start a thorough investigation of the relationships
between open inference, abduction and ASP. We shall prove that open programs
generalize the other two frameworks. The generalized framework suggests
interesting extensions of abduction under the generalized stable model
semantics. In some cases, we will be able to reduce open inference to abduction
and ASP, thereby estimating its computational complexity. At the same time, the
aforementioned reduction opens the way to new applications of abduction and
ASP.
|
Domain-Dependent Knowledge in Answer Set Planning
|
In this paper we consider three different kinds of domain-dependent control
knowledge (temporal, procedural and HTN-based) that are useful in planning. Our
approach is declarative and relies on the language of logic programming with
answer set semantics (AnsProlog*). AnsProlog* is designed to plan without
control knowledge. We show how temporal, procedural and HTN-based control
knowledge can be incorporated into AnsProlog* by the modular addition of a
small number of domain-dependent rules, without the need to modify the planner.
We formally prove the correctness of our planner, both in the absence and
presence of the control knowledge. Finally, we perform some initial
experimentation that demonstrates the potential reduction in planning time that
can be achieved when procedural domain knowledge is used to solve planning
problems with large plan length.
|
"Minimal defence": a refinement of the preferred semantics for
argumentation frameworks
|
Dung's abstract framework for argumentation enables a study of the
interactions between arguments based solely on an ``attack'' binary relation on
the set of arguments. Various ways to solve conflicts between contradictory
pieces of information have been proposed in the context of argumentation,
nonmonotonic reasoning or logic programming, and can be captured by appropriate
semantics within Dung's framework. A common feature of these semantics is that
one can always maximize in some sense the set of acceptable arguments. We
propose in this paper to extend Dung's framework in order to allow for the
representation of what we call ``restricted'' arguments: these arguments should
only be used if absolutely necessary, that is, in order to support other
arguments that would otherwise be defeated. We modify Dung's preferred
semantics accordingly: a set of arguments becomes acceptable only if it
contains a minimum of restricted arguments, for a maximum of unrestricted
arguments.
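For reference, the standard notions being refined here (Dung's definitions,
stated informally): a set S of arguments is conflict-free if no argument in S
attacks another; S defends an argument a if every attacker of a is attacked by
some member of S; S is admissible if it is conflict-free and defends all of its
members; and the preferred extensions are the maximal admissible sets. The
proposal above changes the maximization step so that restricted arguments are
added only when needed for defence.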
|
Two Representations for Iterative Non-prioritized Change
|
We address a general representation problem for belief change, and describe
two interrelated representations for iterative non-prioritized change: a
logical representation in terms of persistent epistemic states, and a
constructive representation in terms of flocks of bases.
|
Collective Argumentation
|
An extension of an abstract argumentation framework, called collective
argumentation, is introduced in which the attack relation is defined directly
among sets of arguments. The extension turns out to be suitable, in particular,
for representing semantics of disjunctive logic programs. Two special kinds of
collective argumentation are considered in which the opponents can share their
arguments.
|
Logic Programming with Ordered Disjunction
|
Logic programs with ordered disjunction (LPODs) combine ideas underlying
Qualitative Choice Logic (Brewka et al. KR 2002) and answer set programming.
Logic programming under answer set semantics is extended with a new connective
called ordered disjunction. The new connective allows us to represent
alternative, ranked options for problem solutions in the heads of rules: A
\times B intuitively means: if possible A, but if A is not possible then at
least B. The semantics of logic programs with ordered disjunction is based on a
preference relation on answer sets. LPODs are useful for applications in design
and configuration and can serve as a basis for qualitative decision making.
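A sketch of how the preference arises (following Brewka-style LPOD semantics;
the function and variable names are ours): each rule is satisfied in an answer
set to a numeric degree, and answer sets with better degrees are preferred,
under several possible comparison criteria.

def degree(head_options, body_holds, answer_set):
    # Degree to which A1 x ... x An <- body is satisfied in answer_set.
    if not body_holds:
        return 1  # inapplicable rules are satisfied at the best degree
    for k, atom in enumerate(head_options, start=1):
        if atom in answer_set:
            return k  # first (most preferred) head option that is realized
    return len(head_options) + 1  # unsatisfied; guards against misuse

For example, degree(['a', 'b'], True, {'b'}) returns 2: the best option a was
not realized, the second-best option b was.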
|
Compilation of Propositional Weighted Bases
|
In this paper, we investigate the extent to which knowledge compilation can
be used to improve inference from propositional weighted bases. We present a
general notion of compilation of a weighted base that is parametrized by any
equivalence-preserving compilation function. Both negative and positive
results are presented. On the one hand, complexity results are identified,
showing that the inference problem from a compiled weighted base is as
difficult as in the general case, when the prime implicates, Horn cover or
renamable Horn cover classes are targeted. On the other hand, we show that the
inference problem becomes tractable whenever DNNF-compilations are used and
clausal queries are considered. Moreover, we show that the set of all preferred
models of a DNNF-compilation of a weighted base can be computed in time
polynomial in the output size. Finally, we sketch how our results can be used
in model-based diagnosis in order to compute the most probable diagnoses of a
system.
|
Modeling Complex Domains of Actions and Change
|
This paper studies the problem of modeling complex domains of actions and
change within high-level action description languages. We investigate two main
issues of concern: (a) can we represent complex domains that capture together
different problems such as ramifications, non-determinism and concurrency of
actions, at a high-level, close to the given natural ontology of the problem
domain and (b) what features of such a representation can affect, and how, its
computational behaviour. The paper describes the main problems faced in this
representation task and presents the results of an empirical study, carried out
through a series of controlled experiments, to analyze the computational
performance of reasoning in these representations. The experiments compare
different representations obtained, for example, by changing the basic ontology
of the domain or by varying the degree of use of indirect effect laws through
domain constraints. This study has helped to expose the main sources of
computational difficulty in the reasoning and suggest some methodological
guidelines for representing complex domains. Although our work has been carried
out within one particular high-level description language, we believe that the
results, especially those that relate to the problems of representation, are
independent of the specific modeling language.
|
Value Based Argumentation Frameworks
|
This paper introduces the notion of value-based argumentation frameworks, an
extension of the standard argumentation frameworks proposed by Dung, which are
able to show how rational decision-making is possible in cases where arguments derive
their force from the social values their acceptance would promote.
|
Preferred well-founded semantics for logic programming by alternating
fixpoints: Preliminary report
|
We analyze the problem of defining well-founded semantics for ordered logic
programs within a general framework based on alternating fixpoint theory. We
start by showing that generalizations of existing answer set approaches to
preference are too weak in the setting of well-founded semantics. We then
specify some informal yet intuitive criteria and propose a semantical framework
for preference handling that is more suitable for defining well-founded
semantics for ordered logic programs. The suitability of the new approach is
supported by the fact that our semantics satisfies many attractive properties.
In particular, our semantics is still correct with respect to
various existing answer sets semantics while it successfully overcomes the
weakness of their generalization to well-founded semantics. Finally, we
indicate how an existing preferred well-founded semantics can be captured
within our semantical framework.
|
Embedding Default Logic in Propositional Argumentation Systems
|
In this paper we present a transformation of finite propositional default
theories into so-called propositional argumentation systems. This
transformation allows us to characterize all notions of Reiter's default logic in
the framework of argumentation systems. As a consequence, computing extensions,
or determining whether a given formula belongs to one extension or all
extensions can be answered without leaving the field of classical propositional
logic. The transformation proposed is linear in the number of defaults.
|
On the existence and multiplicity of extensions in dialectical
argumentation
|
In the present paper, the existence and multiplicity problems of extensions
are addressed. The focus is on extensions of the stable type. The main result of
the paper is an elegant characterization of the existence and multiplicity of
extensions in terms of the notion of dialectical justification, a close cousin
of the notion of admissibility. The characterization is given in the context of
the particular logic for dialectical argumentation DEFLOG. The results are of
direct relevance for several well-established models of defeasible reasoning
(like default logic, logic programming and argumentation frameworks), since
elsewhere dialectical argumentation has been shown to have close formal
connections with these models.
|
Nonmonotonic Probabilistic Logics between Model-Theoretic Probabilistic
Logic and Probabilistic Logic under Coherence
|
Recently, it has been shown that probabilistic entailment under coherence is
weaker than model-theoretic probabilistic entailment. Moreover, probabilistic
entailment under coherence is a generalization of default entailment in System
P. In this paper, we continue this line of research by presenting probabilistic
generalizations of more sophisticated notions of classical default entailment
that lie between model-theoretic probabilistic entailment and probabilistic
entailment under coherence. That is, the new formalisms properly generalize
their counterparts in classical default reasoning, they are weaker than
model-theoretic probabilistic entailment, and they are stronger than
probabilistic entailment under coherence. The new formalisms are useful
especially for handling probabilistic inconsistencies related to conditioning
on zero events. They can also be applied for probabilistic belief revision.
More generally, in the same spirit as a similar previous paper, this paper
sheds light on exciting new formalisms for probabilistic reasoning beyond the
well-known standard ones.
|
Evaluating Defaults
|
We seek to find normative criteria of adequacy for nonmonotonic logic similar
to the criterion of validity for deductive logic. Rather than stipulating that
the conclusion of an inference be true in all models in which the premises are
true, we require that the conclusion of a nonmonotonic inference be true in
``almost all'' models of a certain sort in which the premises are true. This
``certain sort'' specification picks out the models that are relevant to the
inference, taking into account factors such as specificity and vagueness, and
previous inferences. The frequencies characterizing the relevant models reflect
known frequencies in our actual world. The criteria of adequacy for a default
inference can be extended by thresholding to criteria of adequacy for an
extension. We show that this avoids the implausibilities that might otherwise
result from the chaining of default inferences. The model proportions, when
construed in terms of frequencies, provide a verifiable grounding of default
rules, and can become the basis for generating default rules from statistics.
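One way to make the threshold reading concrete (our formalization of the wording
above, not the paper's notation): with \mathcal{M}_{rel} the set of relevant
models and \epsilon a tolerance,
\[ \Gamma \mathrel{\mid\!\sim} \varphi \quad\text{iff}\quad \frac{\#\{m \in \mathcal{M}_{rel} : m \models \Gamma \cup \{\varphi\}\}}{\#\{m \in \mathcal{M}_{rel} : m \models \Gamma\}} \;\ge\; 1 - \epsilon. \]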
|
Linking Makinson and Kraus-Lehmann-Magidor preferential entailments
|
About ten years ago, various notions of preferential entailment were
introduced. The main reference is a paper by Kraus, Lehmann and Magidor (KLM),
one of the main competitors being a more general version defined by Makinson
(MAK). These two versions have already been compared, but it is time to revisit
these comparisons. Here are our three main results: (1) These two notions are
equivalent, provided that we restrict our attention, as done in KLM, to the
cases where the entailment respects logical equivalence (on the left and on the
right). (2) A serious simplification of the description of the fundamental
cases in which MAK is equivalent to KLM, including a natural passage in both
ways. (3) The two previous results are given for preferential entailments more
general than considered in some of the original texts, but they apply also to
the original definitions and, for this particular case also, the models can be
simplified.
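For orientation, the common core of both notions (a standard formulation, not
the paper's exact level of generality): given a preference relation \prec on
models,
\[ X \mathrel{\mid\!\sim} \varphi \quad\text{iff}\quad \varphi \text{ holds in every } \prec\text{-minimal model of } X, \]
with KLM additionally requiring, in effect, that the entailment respect logical
equivalence on both sides.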
|
Knowledge Representation
|
This work analyses the main features that should be present in knowledge
representation. It suggests a model for representation and a way to implement
this model in software. Representation takes care of both low-level sensor
information and high-level concepts.
|
Causes and Explanations: A Structural-Model Approach. Part II:
Explanations
|
We propose new definitions of (causal) explanation, using structural
equations to model counterfactuals. The definition is based on the notion of
actual cause, as defined and motivated in a companion paper. Essentially, an
explanation is a fact that is not known for certain but, if found to be true,
would constitute an actual cause of the fact to be explained, regardless of the
agent's initial uncertainty. We show that the definition handles well a number
of problematic examples from the literature.
|
Reasoning about Evolving Nonmonotonic Knowledge Bases
|
Recently, several approaches to updating knowledge bases modeled as extended
logic programs have been introduced, ranging from basic methods to incorporate
(sequences of) sets of rules into a logic program, to more elaborate methods
which use an update policy for specifying how updates must be incorporated. In
this paper, we introduce a framework for reasoning about evolving knowledge
bases, which are represented as extended logic programs and maintained by an
update policy. We first describe a formal model which captures various update
approaches, and we define a logical language for expressing properties of
evolving knowledge bases. We then investigate semantical and computational
properties of our framework, where we focus on properties of knowledge states
with respect to the canonical reasoning task of whether a given formula holds
on a given evolving knowledge base. In particular, we present finitary
characterizations of the evolution for certain classes of framework instances,
which can be exploited for obtaining decidability results. In more detail, we
characterize the complexity of reasoning for some meaningful classes of
evolving knowledge bases, ranging from polynomial to double exponential space
complexity.
|
A Comparison of Different Cognitive Paradigms Using Simple Animats in a
Virtual Laboratory, with Implications to the Notion of Cognition
|
In this thesis I present a virtual laboratory which implements five different
models for controlling animats: a rule-based system, a behaviour-based system,
a concept-based system, a neural network, and a Braitenberg architecture.
Through different experiments, I compare the performance of the models and
conclude that there is no "best" model, since different models are better for
different things in different contexts.
The models I chose, although quite simple, represent different approaches for
studying cognition. Using the results as an empirical philosophical aid,
I note that there is no "best" approach for studying cognition, since
different approaches have all advantages and disadvantages, because they study
different aspects of cognition from different contexts. This has implications
for current debates on "proper" approaches for cognition: all approaches are a
bit proper, but none will be "proper enough". I draw remarks on the notion of
cognition abstracting from all the approaches used to study it, and propose a
simple classification for different types of cognition.
|
Revising Partially Ordered Beliefs
|
This paper deals with the revision of partially ordered beliefs. It proposes
a semantic representation of epistemic states by partial pre-orders on
interpretations and a syntactic representation by partially ordered belief
bases. Two revision operations (the revision stemming from the history of
observations and the possibilistic revision), both defined when the epistemic
state is represented by a total pre-order, are generalized, at the semantic
level, to the case of a partial pre-order on interpretations and, at the
syntactic level, to the case of a partially ordered belief base. The
equivalence between the two representations is shown for both revision
operations.
|
Can the whole brain be simpler than its "parts"?
|
This is the first in a series of connected papers discussing the problem of a
dynamically reconfigurable universal learning neurocomputer that could serve as
a computational model for the whole human brain. The whole series is entitled
"The Brain Zero Project. My Brain as a Dynamically Reconfigurable Universal
Learning Neurocomputer." (For more information visit the website
www.brain0.com.) This introductory paper is concerned with general methodology.
Its main goal is to explain why it is critically important for both neural
modeling and cognitive modeling to pay much attention to the basic requirements
of the whole brain as a complex computing system. The author argues that it can
be easier to develop an adequate computational model for the whole
"unprogrammed" (untrained) human brain than to find adequate formal
representations of some nontrivial parts of the brain's performance. (In the same
way as, for example, it is easier to describe the behavior of a complex
analytical function than the behavior of its real and/or imaginary part.) The
"curse of dimensionality" that plagues purely phenomenological ("brainless")
cognitive theories is a natural penalty for an attempt to represent
insufficiently large parts of the brain's performance in a state space of
insufficiently high dimensionality. A "partial" modeler encounters "Catch 22."
An attempt to simplify a cognitive problem by artificially reducing its
dimensionality makes the problem more difficult.
|
Adaptive Development of Koncepts in Virtual Animats: Insights into the
Development of Knowledge
|
As a part of our effort for studying the evolution and development of
cognition, we present results derived from synthetic experimentations in a
virtual laboratory where animats develop koncepts adaptively and ground their
meaning through action. We introduce the term "koncept" to avoid confusions and
ambiguity derived from the wide use of the word "concept". We present the
models which our animats use for abstracting koncepts from perceptions,
plastically adapt koncepts, and associate koncepts with actions. On a more
philosophical vein, we suggest that knowledge is a property of a cognitive
system, not an element, and therefore observer-dependent.
|
Dynamic Adjustment of the Motivation Degree in an Action Selection
Mechanism
|
This paper presents a model for dynamic adjustment of the motivation degree,
using a reinforcement learning approach, in an action selection mechanism
previously developed by the authors. The learning takes place in the
modification of a parameter of the model of combination of internal and
external stimuli. Experiments that show the claimed properties are presented,
using a VR simulation developed for such purposes. The importance of adaptation
by learning in action selection is also discussed.
|
Action Selection Properties in a Software Simulated Agent
|
This article analyses the properties of the Internal Behaviour network, an
action selection mechanism previously proposed by the authors, with the aid of
a simulation developed for such ends. A brief review of the Internal Behaviour
network is followed by the explanation of the implementation of the simulation.
Then, experiments are presented and discussed analysing the properties of the
action selection in the proposed model.
|
A Model for Combination of External and Internal Stimuli in the Action
Selection of an Autonomous Agent
|
This paper proposes a model for combination of external and internal stimuli
for the action selection in an autonomous agent, based in an action selection
mechanism previously proposed by the authors. This combination model includes
additive and multiplicative elements, which makes it possible to incorporate
new properties that enhance the action selection. A parameter a, which is part
of the proposed model, regulates the degree to which the observed external
behaviour depends on the internal states of the entity.
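The abstract does not give the combination formula; purely as an illustration of
how a single parameter can blend additive and multiplicative coupling
(hypothetical form and names, not the paper's model):

def combined_intensity(external, internal, alpha):
    # Hypothetical toy form, not the paper's model: alpha tunes how
    # strongly the expressed behaviour depends on the internal state
    # (alpha large: external stimuli dominate; alpha small: the product
    # with the internal state dominates).
    return external * (alpha + internal)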
|
Searching for Plannable Domains can Speed up Reinforcement Learning
|
Reinforcement learning (RL) involves sequential decision making in uncertain
environments. The aim of the decision-making agent is to maximize the benefit
of acting in its environment over an extended period of time. Finding an
optimal policy in RL may be very slow. To speed up learning, one often used
solution is the integration of planning, for example, Sutton's Dyna algorithm,
or various other methods using macro-actions.
Here we suggest separating the plannable, i.e., close-to-deterministic, parts of
the world and focusing planning efforts on them. A novel reinforcement
learning method called plannable RL (pRL) is proposed here. pRL builds a simple
model, which is used to search for macro actions. The simplicity of the model
makes planning computationally inexpensive. It is shown that pRL finds an
optimal policy, and that plannable macro actions found by pRL are near-optimal.
In turn, it is unnecessary to try large numbers of macro actions, which enables
fast learning. The utility of pRL is demonstrated by computer simulations.
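A toy sketch of the plannability idea (our reconstruction, not the paper's
algorithm; the class name and threshold are ours): flag state-action pairs whose
empirical transition distribution is nearly deterministic, and keep only those
transitions as a cheap deterministic model in which macro-actions can be
searched.

from collections import defaultdict

class PlannabilityFilter:
    def __init__(self, threshold=0.9):
        self.threshold = threshold  # how deterministic "plannable" must be
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, s, a, s_next):
        self.counts[(s, a)][s_next] += 1

    def plannable(self, s, a):
        c = self.counts[(s, a)]
        total = sum(c.values())
        return total > 0 and max(c.values()) / total >= self.threshold

    def deterministic_model(self):
        # Graph of near-deterministic transitions only; deterministic
        # search for macro-actions on this graph is computationally cheap.
        return {sa: max(c, key=c.get)
                for sa, c in self.counts.items() if self.plannable(*sa)}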
|
Temporal plannability by variance of the episode length
|
Optimization of decision problems in stochastic environments is usually
concerned with maximizing the probability of achieving the goal and minimizing
the expected episode length. For interacting agents in time-critical
applications, learning whether subtasks (events) or the full task can be
scheduled is an additional relevant issue. Besides, there exist highly
stochastic problems where the actual trajectories show great variety from
episode to episode, but completing the task takes almost the same amount of
time. The identification of sub-problems of this nature may promote, e.g.,
planning, scheduling and segmenting Markov decision processes. In this work,
formulae for the average duration as well as the standard deviation of the
duration of events are derived. The emerging Bellman-type equation is a simple
extension of Sobel's work (1982). Methods of dynamic programming as well as
methods of reinforcement learning can be applied for our extension. Computer
A computer demonstration on a toy problem serves to highlight the principle.
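A standard derivation consistent with this extension (fixed policy,
unit-duration steps; our notation): let T(s) and M(s) be the first and second
moments of the remaining episode duration from state s. Conditioning on one
step, \tau = 1 + \tau', gives
\[ T(s) = 1 + \sum_{s'} P(s' \mid s)\, T(s'), \qquad M(s) = 1 + \sum_{s'} P(s' \mid s)\,\bigl(2\,T(s') + M(s')\bigr), \]
so the variance is recovered as \mathrm{Var}(s) = M(s) - T(s)^2; both equations
are Bellman-type and amenable to dynamic programming or reinforcement-learning
updates.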
|
Comparisons and Computation of Well-founded Semantics for Disjunctive
Logic Programs
|
Much work has been done on extending the well-founded semantics to general
disjunctive logic programs and various approaches have been proposed. However,
these semantics differ from each other, and no consensus has been reached about
which is the intended one. In this paper we look at disjunctive
well-founded reasoning from different angles. We show that there is an
intuitive form of the well-founded reasoning in disjunctive logic programming
which can be characterized by slightly modifying some existing approaches to
defining disjunctive well-founded semantics, including program transformations,
argumentation, unfounded sets (and resolution-like procedure). We also provide
a bottom-up procedure for this semantics. The significance of our work lies not
only in clarifying the relationship among different approaches, but also in
shedding some light on what an intended well-founded semantics for disjunctive
logic programs should be.
|