
Related work

My work is complementary to other approaches to resource-bounded agents (e.g., [Kor98]) in the following sense: instead of trying to find near-optimal solutions to some specific problem (or class of problems), I try to model the control mechanism that an agent uses to select a suitable action sequence for the given situation. Such a mechanism could be used, for example, to decide whether in a certain situation it is necessary or desirable to trade quality of results for time or other resources.

Most theories of agency have tried to integrate knowledge and time in a single framework. In most cases, however, a modal epistemic logic is combined with some temporal logic, yielding a hybrid system that can at best be used to describe implicit knowledge. There are in fact very few works that treat time as a valuable resource for computing knowledge. In the following I briefly discuss some other attempts to investigate the relation between knowledge and reasoning actions and the evolution of knowledge over time.

Elgot-Drapkin et al. ([EDMP91], [NKP94]) developed what they called step-logics (later renamed active logics) to model the reasoning of agents over time. The underlying intuition of their approach is that an agent can carry out one step of reasoning at each time step: if the premises of a rule are known at some point, then the agent applies the rule and knows the conclusion at the next point. For example, if both $\alpha$ and $\beta$ are known at time $t$, then their conjunction is known at time $t+1$, so the formula $K_i^t\alpha\land K_i^t\beta\to K_i^{t+1}(\alpha\land\beta)$ is assumed as an axiom.
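
To make this one-step-per-tick intuition concrete, the following is a minimal sketch of how an agent's knowledge set might grow by exactly one round of rule applications per time unit. The encoding of formulae as nested tuples and the restriction to conjunction introduction are my own illustrative assumptions and are not part of the step-logics formalism itself.

\begin{verbatim}
# Illustrative sketch only: one round of conjunction introduction per tick.
def step(known):
    """Knowledge at t+1: everything known at t plus every conjunction
    that can be formed from formulae known at t (one inference step)."""
    new = set(known)
    for a in known:
        for b in known:
            new.add(('and', a, b))
    return new

k0 = {'alpha', 'beta'}                  # knowledge at time t
k1 = step(k0)                           # knowledge at time t+1
print(('and', 'alpha', 'beta') in k1)   # True
\end{verbatim}

Each call to step corresponds to one time unit, so the conjunction of $\alpha$ and $\beta$ becomes known exactly one tick after its conjuncts, mirroring the axiom above.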

Although the step-logics framework takes the cost of reasoning into account, it does not do so consistently, and the resulting assumptions about the agents' reasoning capacities are too strong. For example, the knowledge necessitation rule (``agents believe all logical truths'') and the congruence rule (``agents believe all formulae that are logically equivalent to their beliefs'') are valid. Moreover, if $\alpha \to \beta$ is provable and $\alpha$ is known at time $t$, then $\beta$ is known at time $t+1$, however complex the derivation of $\beta$ from $\alpha$ may be. Finally, unlimited space and parallelism must be assumed implicitly in order to justify an axiom like $K_i^t\alpha\land K_i^t\beta\to K_i^{t+1}(\alpha\land\beta)$: every logical consequence that can be derived in one step is supposed to be actually derived after one time unit. Thus, the step-logics framework suffers from several strong forms of logical omniscience.

Stelzner ([Ste84]) discussed a number of epistemic concepts, their role in rational discourses, and their dependence on parameters such as time, agent, and context. He proposed a family of logics to formalize the concept of a (hypothetical) obligation to defend an asserted sentence. That concept is related to the concepts of knowledge and belief in the following way. In a rational discourse, if an agent asserts a sentence, then he is obligated to defend it when it is challenged, because by asserting it he has made public that he believes it. The obligation to defend the sentence is only hypothetical, because it does not arise unless the agent's assertion is challenged.

To qualify as rational, the agents in a discourse must satisfy certain conditions. A rationality postulate may say, for example, that if an agent is obligated to defend $\alpha$ at time $t$ and if $\beta$ can be inferred from $\alpha$ by one inference step, then the agent can be obligated to defend $\beta$ at time $t+1$. Hence, $K_i^t(\alpha\land\beta)\to K_i^{t+1}\alpha$ is assumed as an axiom. (A time line isomorphic to the set of natural numbers, generated by the consecutive ``moves'' in the discourse, is assumed.) With the aid of such axioms one can classify agents according to their rationality.
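
The following minimal sketch, which is entirely my own illustration and not Stelzner's system, shows how such a hypothetical obligation could propagate by one inference step per discourse move; the tuple encoding of formulae and the single inference rule (conjunction elimination) are assumptions made only for the example.

\begin{verbatim}
# Illustrative sketch only: obligations to defend propagate one step per move.
def one_step(formula):
    """One-step consequences; only conjunction elimination is modelled:
    from ('and', a, b) infer a and b."""
    if isinstance(formula, tuple) and formula[0] == 'and':
        return {formula[1], formula[2]}
    return set()

def next_obligations(obligations, newly_challenged):
    """Obligations at move t+1: current obligations, newly challenged
    assertions, and one-step consequences of current obligations."""
    result = set(obligations) | set(newly_challenged)
    for f in obligations:
        result |= one_step(f)
    return result

obl0 = next_obligations(set(), {('and', 'alpha', 'beta')})  # assertion challenged
obl1 = next_obligations(obl0, set())
print('alpha' in obl1)   # True: obligated to defend alpha one move later
\end{verbatim}

The run mirrors the axiom instance above: an obligation to defend $\alpha\land\beta$ at one move yields an obligation to defend $\alpha$ at the next.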

If interpreted as logics of knowledge, Stelzner's logics could be regarded as formalizations of the concept of implicit knowledge, but not of explicit knowledge. A statement such as $K_i^t(\alpha\land\beta)\to K_i^{t+1}\alpha$ may be more acceptable than the axiom $K_i^t(\alpha\land\beta)\to K_i^t\alpha$, but it is still too strong for the notion of actual knowledge.

Halpern, Moses, and Vardi ([HMV94]) developed a general framework for modeling knowledge and computation. It is assumed that at any state, an agent has some local data and exactly one local algorithm, which is defined for all formulae and always terminates with one of three possible answers: ``Yes'', ``No'', or ``?''. Intuitively, ``Yes'' means that the formula in question belongs to the agent's implicit knowledge, ``No'' means that it does not, and ``?'' means that the agent is unable to determine whether or not the formula follows from all that he knows. At any state, the agent is said to know a fact if the output of his local algorithm, run on that fact and his local data, is ``Yes''. In other words, he can compute that he (implicitly) knows that fact. The authors call this notion of knowledge algorithmic knowledge. A local algorithm of an agent $i$ is said to be sound if, for any formula $\alpha$ and any local data, the answer ``Yes'' implies that $\alpha$ is in fact $i$'s implicit knowledge at the state under consideration, and the answer ``No'' implies that he does not know $\alpha$ implicitly at that state. The algorithm is called complete if it never returns the answer ``?''.
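
As a concrete illustration of the algorithmic-knowledge idea, the sketch below implements one possible local algorithm; the choice of local data (a set of stored formulae) and the particular, deliberately weak algorithm are my own assumptions and are meant only to show how the three answers and the knowledge test fit together.

\begin{verbatim}
# Illustrative sketch only: a local algorithm that merely looks up stored facts.
def local_algorithm(formula, local_data):
    """Answer 'Yes' if the formula is stored literally, 'No' if its
    negation is stored, and '?' otherwise; no deduction is attempted."""
    if formula in local_data:
        return 'Yes'
    if ('not', formula) in local_data:
        return 'No'
    return '?'

def knows(formula, local_data):
    """Algorithmic knowledge: the agent knows the fact at a state iff his
    local algorithm answers 'Yes' on that fact and his local data."""
    return local_algorithm(formula, local_data) == 'Yes'

data = {'alpha', ('not', 'gamma')}
print(knows('alpha', data))            # True
print(local_algorithm('beta', data))   # '?'
\end{verbatim}

Because this algorithm never performs any deduction, it can answer ``?'' on formulae that do follow from the stored data, which is exactly the kind of incompleteness the framework allows.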

Obviously, if the employed algorithms are not complete, then logical omniscience is avoided, so some aspect of resource boundedness can be modeled. The authors view their concept of knowledge as a kind of explicit knowledge. However, an agent cannot really act upon that knowledge immediately, because the information must be inferred first. Hence, that kind of knowledge may not suffice to justify an agent's actions if he needs to act before the computation is completed. Moreover, under some circumstances this concept of knowledge exhibits properties of implicit knowledge: as the authors themselves pointed out, algorithmic knowledge coincides with implicit knowledge when sound and complete algorithms are employed.

The framework of Halpern et al. specifies a general epistemic language which can be used to describe a large number of situations in which computing knowledge is involved. However, it does not really specify a logic for reasoning about knowledge: because their notion of an algorithm is so general, the class of models is very large, and consequently no genuine epistemic statement is valid in all models. There seems to be no easy way to make their concept of knowledge more specific so that epistemic inference relations can be established. My framework differs from that of [HMV94] in this respect: I have shown that certain epistemic statements are valid for intelligent, resource-bounded agents. The epistemic consequence relations defined in my framework justify inferences about agents once a general rationality assumption has been made.

In the literature on belief revision, some authors have considered belief-changing actions. For example, van Linder, van der Hoek, and Meyer ([vLvdHM95a], [vLvdHM95b]) have formalized the change of knowledge through actions. However, they make very strong assumptions about knowledge: their agents are logically omniscient, and the actions they consider lead from one deductively closed belief set to another. Thus, their work should be read in terms of information dynamics rather than knowledge dynamics.

