
Explicit knowledge and reasoning actions

Let us consider an inference rule, say $R$. It can be a valid inference rule of classical logic, or of some other (non-classical) logic, for example intuitionistic logic, conditional logic or relevant logic. Assume that an agent accepts $R$ as valid and is able to use $R$. What does this mean? In the modal approach we formalize this idea by an axiom saying that the knowledge set of the agent is closed under the rule, that is, if all premises of the rule are known then the conclusion of $R$ is also known. However, as we noted before, this is only true of implicit knowledge. In the context of explicit knowledge it must mean something different: if the agent knows all premises of the rule, and if he performs the inference according to the rule $R$, then he will know the conclusion. The agent does not know the conclusion automatically, but rather as the result of some action, viz. the (mental) action of performing the corresponding inference. If he does not perform this action, then we cannot require him to know the conclusion, although the conclusion may seem to be an obvious consequence of the sentences under consideration.

The same line of argumentation applies to logical axioms, which can be viewed as inference rules without any premises. We cannot require an agent to know all axioms automatically and permanently; rather, he must carry out some action before he can acquire knowledge of a particular axiom. Gaining knowledge of other, less obvious theorems is even harder: agents usually need to perform more complex computations in order to establish a theorem. Thus, it is possible that the agent knows all logical truths, but merely in principle. This knowledge is only implicit. In reality he never knows them all at once explicitly.

For formalizing the reasoning actions it is natural to use (a form of) dynamic logic ([Har84], [Gol87], [KT90]; see also appendix A for a brief overview). We can add a set of basic actions to the language of epistemic logic. The set of formulae now includes formulae like $[R_i]K_i\alpha$ or $\langle R_i \rangle K_i\alpha$ with the intended meaning: ``always after using rule $R$ (respectively, sometimes after using $R$) the agent $i$ knows $\alpha$''. The formalization of the idea that an agent accepts and is able to use an inference rule is straightforward. For example, the idea that the agent $i$ accepts modus ponens can be formalized by the axiom $K_i\alpha\land K_i(\alpha\to \beta)\to \langle MP_i \rangle K_i\beta$. This axiom says no more than that if agent $i$ knows $\alpha$ and also knows that $\alpha$ implies $\beta$, then after a suitable inference step he will know $\beta$.
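
To fix intuitions, the following small Haskell sketch models one possible reading of these axioms: explicit knowledge is a finite set of formulae, and a reasoning action is a function that may add formulae to this set. The names (Formula, KnowledgeSet, Action, mp) are chosen for the illustration only; they are assumptions of the sketch, not part of the formal semantics of the text.

    module ExplicitKnowledge where

    import           Data.Set (Set)
    import qualified Data.Set as Set

    -- Formulae of a propositional epistemic language.
    data Formula
      = Atom String
      | Neg Formula
      | And Formula Formula
      | Or  Formula Formula
      | Imp Formula Formula
      | K   Int Formula            -- K i a: agent i knows a
      deriving (Eq, Ord, Show)

    -- The explicit knowledge of an agent at a moment: a finite set of formulae.
    type KnowledgeSet = Set Formula

    -- A reasoning action transforms the agent's knowledge set.
    type Action = KnowledgeSet -> KnowledgeSet

    -- Modus ponens as a reasoning action: add every b such that both a and
    -- (a -> b) are already explicitly known.
    mp :: Action
    mp ks = ks `Set.union`
            Set.fromList [ b | Imp a b <- Set.toList ks, a `Set.member` ks ]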

As the axioms can be viewed as special inference rules, we can introduce, for each agent and each axiom of the base logic, an action that describes the ability of the agent to use this axiom in his reasoning. (In general, different agents may have different logics, so that the sets of basic actions differ from agent to agent. However, for the sake of simplicity we assume a homogeneous set of agents.) By means of the familiar program connectives of dynamic logic (such as composition and iteration) we can formalize the idea that the agent may come to know the consequences of a sentence which he already knows explicitly, provided that he performs the right reasoning steps. For example, assume that the agent $i$ knows the conjunction of $\alpha$ and $\alpha \to \beta$, that is, $K_i(\alpha\land (\alpha\to \beta))$. In all normal modal systems we can then deduce $K_i(\alpha\land \beta)$. However, this inference is not sound for the actual knowledge of realistic agents: there is no guarantee that the agent will know $\alpha\land\beta$ automatically, as the modal approach suggests. We can only say that if the agent reasons correctly, then he will know $\alpha\land\beta$. In our concrete case, let $CE$, $CI$, $MP$ be the conjunction elimination rule, the conjunction introduction rule, and modus ponens, respectively, and let the symbol ``;'' denote the composition of actions. Then our theorem must be $K_i(\alpha\land (\alpha\to \beta)) \to \langle CE_i;MP_i;CI_i \rangle K_i(\alpha\land \beta)$, and not $K_i(\alpha\land (\alpha\to \beta)) \to K_i(\alpha\land \beta)$ as in the standard modal approach.
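
Continuing the illustrative sketch above (with the same hypothetical names), conjunction elimination, conjunction introduction and composition can be modelled as follows; the value demo checks that the composed action $CE;MP;CI$ indeed produces a knowledge set containing $\alpha\land\beta$ when started from a set containing only $\alpha\land(\alpha\to\beta)$.

    -- Conjunction elimination, and a deliberately coarse version of
    -- conjunction introduction that adds every pairwise conjunction.
    ce :: Action
    ce ks = ks `Set.union`
            Set.fromList (concat [ [a, b] | And a b <- Set.toList ks ])

    ci :: Action
    ci ks = ks `Set.union`
            Set.fromList [ And a b | a <- Set.toList ks, b <- Set.toList ks ]

    -- Left-to-right composition of actions, written ";" in the text.
    andThen :: Action -> Action -> Action
    andThen r s = s . r

    -- From a set containing a /\ (a -> b), the composed action CE;MP;CI
    -- yields a set containing a /\ b; demo evaluates to True.
    demo :: Bool
    demo = And alpha beta `Set.member` after
      where
        alpha = Atom "p"
        beta  = Atom "q"
        start = Set.singleton (And alpha (Imp alpha beta))
        after = (ce `andThen` mp `andThen` ci) start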

In general, suppose that $\beta$ follows from $\alpha$ in some base logic (which is accepted by the agent) and that the agent knows $\alpha$. For explicit knowledge we cannot assume that the agent automatically knows $\beta$. Let a proof of $\beta$ from $\alpha$ be given, where the axioms and inference rules used in the proof are $R^1,\ldots,R^n$ (in this order; the same axiom or inference rule may occur at several places in the sequence). Then, instead of the monotonicity rule of the standard modal approach, we have the axiom $K_i\alpha\to \langle R^1_i;\ldots ;R^n_i \rangle K_i\beta$, where $R^k_i$ is $i$'s reasoning action of applying the inference rule $R^k$ ($k=1,\ldots,n$). This axiom says that if the agent $i$ performs the sequence of actions corresponding to the rules $R^1,\ldots,R^n$ (in this order), then he will know $\beta$ under the given circumstances. Whether or not the agent comes to this conclusion depends crucially on his logical ability. In this way we see that the logical omniscience problem can be solved in a natural way: we can describe agents whose knowledge may or may not be closed under logical laws. On the other hand we can still say that the agent thinks rationally, that he is not logically ignorant. Theoretically he may produce all logical truths and all logical consequences of his knowledge, but only if he is interested in doing so, has enough time and memory, et cetera.
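
In the sketch above, the composed action $R^1_i;\ldots;R^n_i$ corresponds to running a list of actions from left to right; a minimal, assumed helper for this could look as follows.

    -- The composed action R^1;...;R^n, given as a list of actions that are
    -- executed from left to right on the agent's knowledge set.
    runSequence :: [Action] -> Action
    runSequence rs ks = foldl (\current r -> r current) ks rs

    -- For example, runSequence [ce, mp, ci] is the action CE;MP;CI used above.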

In the above argumentation we have made an implicit assumption. We have assumed that all premises, once known by the agent, are still available after the agent performs a reasoning step. In the previous example, if the agent forgets the premise $\alpha$ immediately after using modus ponens, then he cannot apply the conjunction introduction rule to come to the conclusion $\alpha\land\beta$. Thus, we have to postulate that the agent does not forget what he already knows when he performs a reasoning action. This assumption can be formalized using persistence axioms for knowledge, for example, $K_i\alpha\to [R_i]K_i\alpha$.
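
In the illustrative sketch the persistence axiom holds by construction, because every action only adds formulae to the knowledge set; the following assumed check makes this monotonicity explicit.

    -- Persistence in the sketch: an action r does not forget anything at a
    -- knowledge state ks if ks is contained in r ks.
    persists :: Action -> KnowledgeSet -> Bool
    persists r ks = ks `Set.isSubsetOf` r ks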

Are such persistence axioms reasonable? Only under two conditions. First, the truth value of $\alpha$ should not change over time. If $\alpha$ becomes false after $i$'s inference using rule $R$, then it is not reasonable to postulate that $i$ still knows $\alpha$ after the use of $R$. This point should be taken into account when we formally define the language of our logic. In particular, if our language contains temporal indexicals then sentences containing them cannot be regarded as persistent. Second, the truth value of $\alpha$ must not change through the agent's own actions. This excludes formulae such as $\lnot K_i\beta$: it is possible that agent $i$ does not know $\beta$ now, but will know it as a result of his reasoning. In general, a formula in which a knowledge operator occurs essentially negatively (i.e., within the scope of an odd number of negation signs) is not a suitable candidate for a persistent formula. So, we may assume that persistent formulae are built up from objective formulae using conjunction, disjunction, and the knowledge operators only.
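
The syntactic class just described can be made concrete by a small assumed check in the sketch above: a formula is a candidate for persistence if it is built from objective formulae, i.e. formulae without knowledge operators, using conjunction, disjunction, and knowledge operators only.

    -- Objective formulae contain no knowledge operator.
    objective :: Formula -> Bool
    objective f = case f of
      Atom _  -> True
      Neg a   -> objective a
      And a b -> objective a && objective b
      Or  a b -> objective a && objective b
      Imp a b -> objective a && objective b
      K _ _   -> False

    -- Persistent candidates: objective formulae, closed under conjunction,
    -- disjunction, and the knowledge operators.
    persistentCandidate :: Formula -> Bool
    persistentCandidate f = objective f || case f of
      And a b -> persistentCandidate a && persistentCandidate b
      Or  a b -> persistentCandidate a && persistentCandidate b
      K _ a   -> persistentCandidate a
      _       -> False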

