
Future directions

The proof systems defined for explicit knowledge (chapter 4) and algorithmic knowledge (chapter 5) provide a procedural semantics for these concepts. It remains an open problem to develop an intuitively acceptable declarative semantics for them. Although it is possible to develop a model theory along the lines of [Ho95] and to prove completeness of the specified systems with respect to those models, such a model theory is simply a reformulation of the procedural semantics and not very useful. It does not provide us with a tool to determine if all (semantically) valid epistemic statements have been captured by the proof system, and it does not allow us to analyze the concepts of knowledge and belief in terms of simpler, more fundamental concepts. But this is exactly what a well-motivated and intuitively acceptable semantics should do.

Semantics has also proved helpful for studying the complexity of modal epistemic logic. The complexity analysis for the specified logics of explicit and algorithmic knowledge remains an open issue. It is hoped that their complexities can be determined by embedding them into systems whose complexities are known.

The investigated logics of explicit knowledge (DEK$_N$ and its extensions) are monotonic in two respects. First, their consequence operations are monotonic. Second, the knowledge of the agents always grows over time. A very interesting and still open problem is to develop logics of explicit knowledge based on non-monotonic logic, in which the agents can revise their knowledge when they find out that it is inconsistent. Here we may expect interesting connections with two other very active fields of AI research, viz. non-monotonic reasoning and the logic of belief revision. This seems to be a promising line of research and deserves further investigation.

My study of algorithmic knowledge has concentrated on a single time point in the real time line. A formula like $K_i^n\alpha$ is primarily a statement about agent $i$'s current ability to compute knowledge. It does not say anything about his actual knowledge $n$ time units from now, except in the case $n=0$. An interesting problem is to relate an agent's algorithmic knowledge at different time points to each other. Let me elaborate on that issue in more detail.

Let the integers be the representation of the objective time structure (``real time line'', ``real world history''). For any integer $t$ we shall write $K_{i,t}^n\alpha$ instead of $K_i^n\alpha$ to make clear that $t$ is the time point under consideration. That is, if at time $t$ the agent $i$ starts computing $\alpha$, then he will need at most $n$ time units to complete the task, so he will succeed at time $t+n$ at the latest. The question is how the inference relations between an agent's algorithmic knowledge at two different time points can be formalized.
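
Before varying the time point, note one relation that the intended reading already yields at a fixed $t$: if at most $n$ time units suffice to compute $\alpha$, then so do at most $n+1$. This suggests the schema

$K_{i,t}^n\alpha \to K_{i,t}^{n+1}\alpha$

The genuinely open questions concern schemata in which $t$ itself varies; these are discussed below.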

Consider the simple case $n=0$ first. Can we assume that an agent's explicit knowledge persists over time? For example, is the formula $K_{i,t}^0\alpha \to K_{i,t+1}^0\alpha$ valid? As I have argued in chapter 4, such a persistence axiom cannot be assumed for all formulae. So we may ask under which circumstances such persistence axioms are valid, or whether certain default inference rules can be used for reasoning about the evolution of explicit knowledge over time.
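
To illustrate why persistence fails in general, consider the following example (not taken from chapter 4, but in the same spirit): let $\beta$ be objective and let $\alpha$ say that agent $j$ does not yet explicitly know $\beta$, i.e., $\alpha = \neg K_j^0\beta$. Agent $i$ may well explicitly know $\alpha$ at time $t$; but if agent $j$ completes a computation of $\beta$ between $t$ and $t+1$, then $\alpha$ becomes false, and $K_{i,t+1}^0\alpha$ must fail. Persistence can thus hold at best for formulae whose own truth-values persist.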

A related question is how an agent's reasoning capacities change over time. If at time $t$ he needs $n$ time units to compute $\alpha$, will this still be the case at time $t+1$? That is, is the formula $K_{i,t}^n\alpha \to K_{i,t+1}^n\alpha$ valid? This cannot hold universally, since the truth-value of $\alpha$ may differ at different states. Moreover, at time $t+1$ the agent $i$ may need more time to compute the same query. Nevertheless, it seems plausible to assume that an agent's reasoning capacities do not decrease over time. Under this assumption, the mentioned formula may be valid if $\alpha$ has a certain syntactic structure, e.g., if $\alpha$ is objective and does not contain any negation sign.
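
The intuition behind such a restricted schema can be spelled out as two premises, both of which would have to be validated against the semantics of chapter 5: (i) a negation-free objective formula that is true at time $t$ remains true at $t+1$, provided the set of established facts only grows; and (ii) by the non-decreasing capacity assumption, a computation that succeeds within $n$ time units at $t$ also succeeds within $n$ time units at $t+1$. Together, (i) and (ii) would yield $K_{i,t}^n\alpha \to K_{i,t+1}^n\alpha$ for this restricted class of formulae.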

Finally, the connection between $K_{i,t}^n\alpha$ and $K_{i,t+n}^0\alpha$ is worth examining. The former formula says that $\alpha$ is implicit knowledge of agent $i$ at time $t$ and that he has the capacity to make it explicit if he chooses to do so and has enough resources to carry out his computation, in this case $n$ time units. The latter formula says that $\alpha$ has actually become explicit knowledge of $i$ at time $t+n$. A framework for reasoning about knowledge and action should be able to formalize agent $i$'s computation of $\alpha$ between $t$ and $t+n$.
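
As a first approximation, this connection could be expressed by a conditional axiom. Writing $\mathit{do}_i(t,\alpha)$ for the statement that agent $i$ actually starts computing $\alpha$ at time $t$ (hypothetical notation, not part of the language defined in this thesis), a candidate axiom would be

$(K_{i,t}^n\alpha \wedge \mathit{do}_i(t,\alpha)) \to K_{i,t+n}^0\alpha$

Note that this presupposes that the agent is not interrupted, and that the explicit knowledge gained persists until $t+n$ if the computation finishes in fewer than $n$ time units.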

To summarize, an agent's algorithmic knowledge at different moments can be related to each other through certain consequence relations. The concrete conditions under which such inferences may be drawn must be examined carefully. It can be expected that default logic and other mechanisms of non-monotonic reasoning can help to specify consequence relations for algorithmic knowledge.
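
For instance, the persistence of explicit knowledge discussed above could be cast as a normal default in the style of Reiter's default logic; the following rule is meant as an illustration of the intended mechanism, not as a rule investigated in this thesis:

$\frac{K_{i,t}^0\alpha \;:\; K_{i,t+1}^0\alpha}{K_{i,t+1}^0\alpha}$

That is, if agent $i$ explicitly knows $\alpha$ at time $t$, and it is consistent to assume that this knowledge persists, then one may conclude by default that he still explicitly knows $\alpha$ at time $t+1$.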

