Logical omniscience vs. logical ignorance

In the previous section several attempts to develop logics of explicit knowledge have been reviewed. The proposed approaches try to avoid logical omniscience by considering agents with weaker deductive powers than those suggested by modal systems. By weakening epistemic logic the LOP can be solved, at least to some extent, and an agent's explicit knowledge can be described more realistically. Weak epistemic logics can be used to describe agents with very restricted reasoning capacities. In fact, many of the discussed approaches can even model agents who know nothing (``total ignoramuses''), those who cannot draw even the most elementary inferences (``complete idiots''), and those who believe nothing but contradictions (``ultimate fools'').

Such irrational agents are clearly not very interesting. To describe more intelligent agents, the common strategy is to postulate axioms which describe the regularities of an agent's knowledge. Such axioms usually require that an agent's belief set, i.e., the set of formulae that he believes, is closed under certain logical laws. In this way the intuitive idea that the agent under consideration is somehow rational can be captured. The epistemic axioms are generally of the form: if all premises of a certain valid inference rule are known, then the conclusion is known. (This is also the general form of a theorem of a modal epistemic logic.) The more axioms are postulated, the more rational the agent. In this way subsystems of the logic K$_N$ can be obtained which do not suffer from the LOP and may describe agents more realistically than the modal systems. So, existing logics of explicit knowledge typically contain a subset of the axioms and rules for knowledge of the system K$_N$, while rejecting the others.
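
For instance, closure under modus ponens is expressed, for each agent $i$, by the distribution axiom of K$_N$:

\[ (K_i\varphi \wedge K_i(\varphi \rightarrow \psi)) \rightarrow K_i\psi \]

A weak logic of explicit knowledge might retain this axiom while rejecting, say, the necessitation rule (from $\vdash \varphi$ infer $\vdash K_i\varphi$), so that the agent can chain his beliefs by modus ponens without automatically knowing every tautology.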

The strategy of employing weak epistemic logics to describe explicit knowledge can solve the logical omniscience problem, at least to some extent. However, other serious problems arise. Here I shall not discuss in detail the specific problems of the various frameworks or try to solve them. Rather, I shall present a more fundamental criticism of the strategy of weakening epistemic logic and discuss the problems which arise when this strategy is pursued.

The use of subsystems of normal modal logics to describe explicit knowledge depends on the following assumption. Although it normally takes some effort to make a piece of implicit knowledge explicit, there are ``obvious'' logical consequences that should be recognized easily by rational agents. Therefore, it may be supposed that an agent's knowledge set at a time is always closed under those principles, although it is not closed under all logical laws. In other words, only closure properties corresponding to the ``simple'' inferences are regarded as valid. So, the task of developing logics of explicit knowledge involves identifying ``obvious'' tautologies and logical inferences.
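
To make the idea of selective closure concrete, consider the following sketch in Python (the tuple encoding of formulae and all names here are mine, not taken from any of the discussed frameworks). The belief set is closed under two ``simple'' rules -- modus ponens and conjunction elimination -- and under nothing else:

    def close(beliefs):
        # Close a finite belief set under modus ponens and conjunction
        # elimination only; both rules yield subformulae of existing
        # beliefs, so the closure stays finite.
        known = set(beliefs)
        while True:
            new = set()
            for f in known:
                if f[0] == 'imp' and f[1] in known:
                    new.add(f[2])               # modus ponens
                if f[0] == 'and':
                    new.update({f[1], f[2]})    # conjunction elimination
            if new <= known:
                return known
            known |= new

    p, q = ('atom', 'p'), ('atom', 'q')
    agent = close({p, ('imp', p, q)})
    print(q in agent)                  # True: an ``obvious'' consequence
    print(('imp', q, q) in agent)      # False: no closure under all tautologies

Such an agent believes exactly what the chosen rules generate from his base -- no more, and in particular not every logical truth.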

This assumption is far from plausible. However weak the epistemic postulates may be, they may still be too strong, at least for some agents. Moreover, a single axiom may seem innocuous, but joining it with other axioms may result in a rather powerful deductive system -- and if a sentence can only be deduced by means of a powerful logic, then it is hardly justifiable to call it explicit knowledge. But even if the above assumption is accepted, many problems remain to be solved.

The first challenge is to select a set of postulates for knowledge which may be assumed to be valid for any rational agent. This is not at all an obvious choice. There is no objective, generally accepted criterion for deciding which tautologies are obvious and which inferences are simple. Many criteria for identifying obvious consequences could be considered (and have been proposed). For example, one might maintain that obvious tautologies should be provable in fewer than $n$ steps, where $n$ is a fairly small natural number. One could restrict the modal depth of knowledge formulae to a small number. One could also demand that only consequences built up from subformulae of the premises can be drawn. However, none of these criteria is wholly convincing. Proof length is not a suitable measure for the simplicity of a tautology because it depends on the syntactical system being used. Many theorems have modal depth 1 and satisfy the subformula condition, but they are still far from obvious, so neither the modal depth nor the subformula condition is an adequate criterion. Therefore it is not possible to draw a sharp line between ``simple rules'' that should be usable by all rational agents and ``complicated inferences'' which agents cannot be assumed to perform.
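
The criteria themselves are at least easy to state precisely. For instance, modal depth can be computed by a simple recursion -- here sketched in the toy tuple encoding used above, with a hypothetical constructor ('know', i, phi) standing for $K_i\varphi$:

    def modal_depth(f):
        # Maximal nesting of knowledge operators in formula f.
        if f[0] == 'atom':
            return 0
        if f[0] == 'know':                       # ('know', agent, phi)
            return 1 + modal_depth(f[2])
        return max(modal_depth(sub) for sub in f[1:])

    # K_i K_j p has modal depth 2; a depth bound of 1 would exclude it.
    print(modal_depth(('know', 'i', ('know', 'j', ('atom', 'p')))))   # 2

That such measures are trivial to compute underlines the objection: syntactic simplicity is cheap to define, but it does not coincide with epistemic obviousness.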

Another challenge is to find an appropriate way of modeling non-omniscient agents without making them logically ignorant. In order to account for the resource-boundedness of agents, their reasoning powers must be kept reasonably weak. However, if the use of certain inference rules is denied, the resulting logics may become too weak for many applications. Surely, logical omniscience must be avoided. But at the same time we are interested in having epistemic logics which are strong enough to allow sufficiently many conclusions to be drawn from a given set of facts about an agent's propositional attitudes. To interact with other agents, an agent needs to make assumptions about their rationality, and he should be able to assume that they are not logically ignorant. We want to model agents who know at least a large class of logical truths and can draw sufficiently many conclusions from their knowledge.

The dilemma between logical omniscience and logical ignorance explains why modal epistemic logics are still widely used in agent theories, despite the fact that implicit knowledge is useless when agents need to act and that logics of explicit knowledge are readily available. The existing logics of explicit knowledge are not suited to characterizing agents because we want to model rational, intelligent agents, not ``complete idiots''. They avoid logical omniscience, but they offer nothing that can account for the rationality of agents. Surely agents are not perfectly rational, yet they are rational. Facing the choice between ``perfectly rational agents'' and ``complete idiots'', agent theorists understandably opt for the former and use logics of implicit knowledge to model their agents, hoping that such logics describe ``almost correctly'' what agents actually know.

The assumptions underlying the use of modal epistemic logics may be justified in some simple domains (``small worlds'', ``toy examples''), where the reasoning tasks involved are quite simple, where the decision process is not very complex, or where the available time is unlimited. In such simple domains, it can be assumed that whenever an agent needs some (implicitly available) information, he can perform the necessary inferences to make the information explicit. However, such an assumption is not justified in more complex applications. Agents normally have to act under tight time constraints, their decisions about which actions to perform depend strongly on their actual knowledge, and the reasoning needed for making correct choices can be very complex and time-consuming. For example, calculating the shortest tour linking all towns in a region, computing a winning strategy in chess, and inferring the answer to a query from a given database are all very hard problems. It is obvious that modal epistemic logics and other logics of implicit knowledge cannot describe correctly what agents actually know in such applications. To describe agents realistically in knowledge-intensive applications, we simply need other logics of knowledge.

What properties should a logic of knowledge have if it is to be useful for describing realistic, implementable agents? The first obvious requirement is that it must not suffer from the LOP. That is, it must not make unrealistic assumptions about the computational capabilities of agents. An epistemic principle can be regarded as realistic if it can be confirmed empirically. For our purposes we shall employ the criterion that an agent constituting a model of the principle can be implemented. Because agents can only handle a limited amount of information, we shall deal with agents whose explicit knowledge can be represented as a finite set of formulae and whose reasoning mechanism contains a finite number of inference procedures. This finiteness condition ensures that agents can be implemented.
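
Pictured as a data structure, the finiteness condition says that an agent amounts to a finite belief base together with a finite repertoire of inference procedures -- a sketch under my own naming, not a definitive architecture:

    class Agent:
        # An implementable agent: finitely many explicit beliefs plus
        # finitely many inference procedures.
        def __init__(self, formulae, procedures):
            self.explicit = set(formulae)        # finite belief base
            self.procedures = list(procedures)   # finite rule repertoire

        def knows_explicitly(self, formula):
            # Answering ``is this known?'' is a lookup in a finite set.
            return formula in self.explicit

        def reason_step(self):
            # One bounded reasoning step: apply every procedure once.
            new = set()
            for rule in self.procedures:
                new |= rule(self.explicit)
            self.explicit |= new
            return new

Both components are finite objects, so such an agent is trivially implementable; whether its behaviour deserves to be called knowledge depends on the further requirements discussed below.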

Solving the LOP is necessary, but not sufficient, for making a logic suitable for reasoning about knowledge. There are other requirements that must be fulfilled. It is important that the logic can do justice to the intuition that agents are rational: although the agents do not automatically know all consequences of their knowledge, they are in principle able to derive them. Because of this rationality the agents are able to act upon their knowledge: they can answer questions based on their knowledge, they can plan their actions in advance, they can predict what other agents can and will do, and so on. If a logic cannot account for the agents' rationality, then there is hardly any justification for calling it a logic of knowledge.

Another important requirement is that the logic be expressive enough to formalize ``interesting'' situations. This condition must remain somewhat vague, because different applications will demand different expressive power from the logic. However, we should keep in mind that the complexity of a logic generally increases with its expressive power, so we must try to find a good trade-off between expressiveness and simplicity.

The next two chapters describe some ways to model agents who are neither logically omniscient nor logically ignorant. In chapter 4 I shall show how explicit knowledge can be modeled without restricting the agents' rationality arbitrarily by denying them the use of certain inference rules. For modeling resource-bounded reasoning, what should be restricted is not the number of admissible inference rules, but the number of times they can be applied. In chapter 5 I shall show how epistemic logic can be combined with a complexity analysis to describe resource-bounded reasoning more accurately.
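
The shift of perspective can be illustrated as follows (a toy sketch of my own, not the formal systems of chapters 4 and 5): every inference rule remains available to the agent, but the number of rule applications is bounded by the resources he invests.

    def knows_within(base, rules, goal, n):
        # Can `goal` be derived from `base` in at most n rounds of rule
        # applications? The rules are unrestricted; only the reasoning
        # effort is bounded.
        known = set(base)
        for _ in range(n):
            if goal in known:
                return True
            new = set()
            for rule in rules:
                new |= rule(known)
            if new <= known:
                break                    # fixpoint: nothing new derivable
            known |= new
        return goal in known

    def modus_ponens(known):
        # From a and ('imp', a, b), infer b.
        return {f[2] for f in known if f[0] == 'imp' and f[1] in known}

    p, q, r = ('atom', 'p'), ('atom', 'q'), ('atom', 'r')
    base = {p, ('imp', p, q), ('imp', q, r)}
    print(knows_within(base, [modus_ponens], r, 1))   # False: one round is too few
    print(knows_within(base, [modus_ponens], r, 2))   # True: two rounds suffice

With unbounded $n$ the agent is omniscient with respect to his rules; with $n = 0$ he draws no inferences at all; the realistic cases lie in between.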

