

Motivation

The formal analysis of reasoning about knowledge has attracted much attention recently. Epistemic logic was invented in the early 1960s by philosophers as a tool for describing epistemic concepts such as knowledge and belief formally. Initially, the main interest was in identifying the inherent properties of knowledge (and related concepts) and in applying the analysis to epistemology. More recently, researchers from other disciplines such as linguistics, economics, game theory, and computer science have become increasingly interested in reasoning about knowledge. In addition to the more traditional topics, many other questions have become relevant for those who are more interested in applications, e.g., questions about computational complexity or the relationship between an agent's knowledge and his actions.

Within computer science, reasoning about knowledge plays an extremely important role in contemporary theories of intelligent agents. In recent years a number of approaches have been proposed in (Distributed) Artificial Intelligence (DAI) to specify rational agents in terms of mental qualities like knowledge, belief, want, goal, commitment, and intention. There is no universally accepted definition of the term ``agent'' in the literature, yet there seems to be a common picture of artificial agents within the DAI community: ``agents'' are, or should be, formal versions of human agents, possessing formal versions of mental attitudes like knowledge, beliefs, and goals. In short, the notion of an ``intentional stance'' ([Den87], [McC79]) is adopted. It has proved possible and useful to characterize agents using those attitudes. There is no clear consensus in the DAI community about precisely which combination of mental attitudes is best suited to characterizing agents. However, there seems to be agreement that belief (or knowledge) should be taken as one of the basic notions of agent theory ([WJ95]).

The emphasis on epistemic concepts is not accidental. First, the role that knowledge plays in decision and action is obvious. Second, knowledge and belief are the most intensively studied of all mentalistic concepts. In fact, the other concepts are usually modeled after the epistemic ones. Third, epistemic concepts are arguably among the most fundamental mental notions: many other mentalistic concepts seem to be derivable from the epistemic ones, but not vice versa. For example, an old philosophical thesis states that the concept of desire is reducible to that of belief: an agent desires something if he believes that having it is useful. A discussion of this desire-as-belief thesis can be found in [Lew88], [Lew96]. The normative concepts of obligation and permission could also be reduced to the concept of belief. Anderson's reduction of deontic logic to alethic modal logic (``something is obligatory if and only if not doing it necessarily leads to punishment'', cf. [And58], [And67]) can be interpreted epistemically as: ``something is obligatory if and only if the agent knows (or believes) that not doing it leads to punishment''. It can be shown that under this epistemic interpretation, the deontic axioms can be derived from the axioms of epistemic logic.
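
To make the last claim concrete, here is a brief sketch; the notation is my own illustration and not taken from Anderson's papers. Let s be a propositional constant read as ``the sanction (punishment) applies''. Anderson's alethic definition of obligation and its epistemic reinterpretation can then be written as

    O p  :=  Box(¬p -> s)    (alethic: not doing p necessarily leads to punishment)
    O p  :=  K(¬p -> s)      (epistemic: the agent knows that not doing p leads to punishment)

Under the epistemic reading, the deontic distribution axiom O(p -> q) -> (O p -> O q) follows from the epistemic axiom K(p -> q) -> (K p -> K q) together with closure of knowledge under logical consequence, since (p & ¬q -> s) and (¬p -> s) jointly entail (¬q -> s). The consistency axiom O p -> ¬O ¬p follows if one additionally assumes ¬K s, i.e., that the agent does not regard the sanction as unavoidable (the epistemic counterpart of Anderson's own extra axiom that the sanction is avoidable).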

In short, formal theories of knowledge constitute the most important foundation for theories of agency. Consequently, all strengths and weaknesses of the underlying epistemic theory propagate to the agent theory based on it. We will see that this has important consequences for the suitability of agent theories for characterizing intelligent agents.

Typically, formal theories of agents are used as internal specification languages, i.e., languages used by agents to reason about themselves and about other agents. As such, agent theories must describe agents accurately and realistically. In order to interact with each other, each agent needs an accurate representation of itself and of the other agents: their information states, their preferences, et cetera. I shall show that this requirement cannot be met if mainstream epistemic logic is used to model an agent's cognitive state.

The purpose of my thesis is to provide a more suitable epistemic foundation for theories of intelligent agents. I will argue that agent theories need to be based on better logics of knowledge than the ones on which they are currently based. The main reason is that agents -- both human and non-human -- are inherently resource-bounded: they cannot perform arbitrarily complex reasoning tasks within constant, limited time. Mainstream modal epistemic logic, however, is not able to account for this resource-boundedness. The most obvious indication of this inability is the so-called logical omniscience problem of epistemic logic. I shall show that almost all work that purports to be about knowledge is done under assumptions that are unreasonable for the knowledge of realistic, resource-bounded agents. I will then propose some systems of epistemic logic which can be used for resource-bounded reasoning.

