Axioms for algorithmic knowledge

Let us assume that an agent $i$ knows $\alpha$ within $m$ units of time, i.e., he needs $m$ time units to infer $\alpha$. Then he is naturally able to do so when more time is available. So we can adopt as an axiom any formula $K_i^m\alpha \to K_i^n\alpha$ where $m<n$. Note that this axiom does not say that knowledge is persistent in the sense that, once established, it remains available henceforth. The formula $K_i^m\alpha$ does not entail that $i$ will know $\alpha$ at time point $m$. It does not even imply that $\alpha$ will eventually be known at all. In this respect the present approach makes a more realistic assumption than the persistence axiom of chapter 4.
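The monotonicity axiom can be illustrated with a toy model (purely illustrative, not part of the formalism): represent an agent's algorithmic knowledge by the minimal number of time units he needs to reliably infer each formula. The formulas and time bounds below are hypothetical example data.

```python
# Toy model: map each formula the agent can infer to the minimal number of
# time units he needs to infer it reliably (hypothetical example data).
min_time = {"p": 2, "p -> q": 5}

def K(m: int, alpha: str) -> bool:
    """Time-bounded knowledge: alpha can be reliably inferred within m units."""
    return alpha in min_time and min_time[alpha] <= m

# The axiom K^m alpha -> K^n alpha for m < n holds trivially in this model:
# raising the time bound can never destroy knowledge.
for alpha in min_time:
    for m in range(10):
        for n in range(m + 1, 10):
            assert not K(m, alpha) or K(n, alpha)
```

Note that the model also makes the non-persistence point concrete: `K(m, alpha)` says nothing about what the agent has derived at any particular moment, only about the time a reliable derivation would take.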

We have remarked previously that the formula $K_i^n\alpha$ contains more information than $K_i^{\exists}\alpha$. While the latter only says that agent $i$ is able to derive $\alpha$ in finite time if he chooses to, the former also specifies the amount of time $i$ needs to complete that task. Thus, the implication $K_i^n\alpha \to
K_i^{\exists} \alpha$ can be assumed as an axiom.

Let $\alpha$ be a provable formula of the logic used by agent $i$. We have argued previously that it cannot be assumed that $i$ knows $\alpha$ automatically (i.e., without any reasoning). However, he may come to know it after some course of reasoning. The interesting question is whether or not he is able to compute $\alpha$ reliably within finite time. That would be the case if the agent has a general strategy which fulfills the following two requirements. First, it selects for any formula of the language an algorithm to compute that formula. Second, if the formula is provable, then the selected algorithm terminates with the correct answer. I shall argue that under reasonably weak assumptions such a strategy exists and can be adopted by any intelligent agent, so that $K_i^{\exists}\alpha$ can be safely postulated whenever $\alpha$ is provable.

The set of axioms that any agent presupposes is decidable -- in the normal case even finite. Because the number of permissible inference rules is also limited, all proofs can be generated algorithmically. Hence there is a general-purpose theorem prover that can validate any theorem in finitely many steps. If the agent's selection mechanism always returns that general algorithm for computing knowledge, he is able to validate every theorem $\alpha$. That is, when presented with a theorem $\alpha$, he can select an algorithm which runs on $\alpha$ and outputs the answer ``Yes'' after a finite number of steps. Although the described strategy (``always use the same algorithm'') satisfies the stated conditions, it may not be the best: specific problems may be solved much more quickly by special algorithms than by a general-purpose theorem prover. Hence, the following is a more reasonable strategy. First, the agent analyzes the query $\alpha$ and tries to select one of his special algorithms to infer it. If no such algorithm can be found, the general algorithm is selected. In this way, he can always find an algorithm to verify $\alpha$. (If the selection mechanism is not reliable, i.e., if it could return a wrong algorithm for some queries, then several algorithms can be selected and executed concurrently or in an interleaved fashion.)
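The two-stage selection strategy can be sketched as follows. The sketch restricts attention to propositional formulas, where a truth-table validity check can stand in for the general-purpose theorem prover (it is sound, complete, and terminating for this fragment); all function names and the sample ``special algorithm'' are illustrative assumptions, not part of the text.

```python
from itertools import product

def variables(f):
    """Collect the propositional variables of a formula (nested tuples/strings)."""
    if isinstance(f, str):
        return {f}
    op, *args = f
    return set().union(*(variables(a) for a in args))

def evaluate(f, env):
    """Evaluate a formula under a truth assignment env."""
    if isinstance(f, str):
        return env[f]
    op, *args = f
    if op == "not":
        return not evaluate(args[0], env)
    if op == "and":
        return evaluate(args[0], env) and evaluate(args[1], env)
    if op == "->":
        return (not evaluate(args[0], env)) or evaluate(args[1], env)
    raise ValueError(f"unknown connective: {op}")

def general_prover(f):
    """The 'general algorithm': check validity over all truth assignments."""
    vs = sorted(variables(f))
    return all(evaluate(f, dict(zip(vs, vals)))
               for vals in product([False, True], repeat=len(vs)))

def is_self_implication(f):
    """Pattern for a sample special algorithm: formulas alpha -> alpha."""
    return isinstance(f, tuple) and f[0] == "->" and f[1] == f[2]

def select_and_run(f):
    """First try a special algorithm; otherwise fall back to the general one."""
    if is_self_implication(f):
        return True          # alpha -> alpha is valid without any search
    return general_prover(f)

assert select_and_run(("->", "p", "p"))                  # special algorithm
assert select_and_run(("->", ("and", "p", "q"), "p"))    # general prover
assert not select_and_run(("->", "p", "q"))              # not a theorem
```

The design point of the strategy is visible here: the special algorithm answers immediately, while the general prover still guarantees an answer in finitely many steps for every query it receives.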

A strategy to successfully prove every provable formula can be acquired by rational agents. An intelligent agent may learn to use some algorithms to compute knowledge. Such algorithms (together with a suitable selection scheme) can be built into artificial agents. Hence, the rule to infer $K_i^{\exists}\alpha$ from $\alpha$ can be assumed to be valid.

The statement $K_i^{\exists}\alpha$ contains some uncertainty. It is not clear how long agent $i$ will need to infer $\alpha$. Can a more definite statement be made? That is, can a natural number $n$ (which typically depends on the structure of the theorem $\alpha$) be determined such that $K_i^n\alpha$ can be assumed as a postulate? The discussion of this question will be postponed until section 5.2.3.

Now suppose that $\avec{\alpha}{\land}{n}\to \beta$ is provable and that each of the formulae $\avec{\alpha}{,}{n}$ can be computed reliably by an agent $i$ in finite time, i.e., $K_i^{\exists}\alpha_1,
\ldots, K_i^{\exists}\alpha_n$ are regarded as true. Is it reasonable to infer that $i$ can compute $\beta$ reliably if he chooses to derive it? I shall argue that the conclusion $K_{i}^{\exists}\beta$ can be justified.

When presented with a question $\beta$, an agent $i$ naturally attempts to derive $\beta$ from all that he knows. It is reasonable to assume that the consequence relation used by a rational agent has a certain transitivity property: if all the formulae $\avec{\alpha}{,}{n}$ are derivable from some knowledge base and $\beta$ can be inferred from $\avec{\alpha}{,}{n}$, then $\beta$ can also be inferred from that knowledge base. Thus we can assume that $\beta$ follows from all that $i$ knows. Because agents are assumed to process only a limited amount of information, every consequence of their knowledge can be computed algorithmically. With a suitable selection strategy, e.g., one of the strategies outlined previously, an agent can find an algorithm to compute his knowledge successfully. Consequently, agent $i$ is able to compute $\beta$ after a finite number of steps. So, if $\avec{\alpha}{\land}{n}\to \beta$ is a theorem then we can assume that $\avec{K_i^{\exists}\alpha}{\land}{n} \to K_{i}^{\exists}\beta$ is valid.

As a special case we can assume that $K_i^{\exists} \alpha \land
K_i^{\exists} (\alpha \to \beta) \to K_i^{\exists} \beta$ is valid, which says that agent $i$ can use modus ponens in his reasoning: if he can derive both $\alpha$ and $\alpha \to \beta$ then he is also able to derive $\beta$. Because explicit knowledge implies implicit knowledge, the formula $\avec{K_i^0\alpha}{\land}{n} \to
K_{i}^{\exists}\beta$ is valid, provided that $\avec{\alpha}{\land}{n}\to \beta$ is a theorem. Thus, agents are able to compute all logical consequences of their explicit knowledge.
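Closure under modus ponens can be sketched as a fixed-point computation. The representation below (formulas as strings, implications the agent can derive as antecedent/consequent pairs) is an illustrative simplification, not the formalism of the text.

```python
def mp_closure(known, implications):
    """Close a set of derivable formulas under modus ponens.

    known:        set of formulas the agent can derive.
    implications: set of (antecedent, consequent) pairs, i.e. implications
                  the agent can derive.
    """
    derived = set(known)
    changed = True
    while changed:
        changed = False
        for a, b in implications:
            # modus ponens: from a and (a -> b), derive b
            if a in derived and b not in derived:
                derived.add(b)
                changed = True
    return derived

# The agent can derive alpha as well as alpha -> beta and beta -> gamma,
# so beta and gamma become derivable too.
assert mp_closure({"alpha"}, {("alpha", "beta"), ("beta", "gamma")}) \
       == {"alpha", "beta", "gamma"}
```

The loop terminates for any finite input because each iteration that continues must add at least one new formula, mirroring the assumption that agents process only a limited amount of information.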

Recall that in chapter 4 the persistence axiom (``everything that has once been deduced remains available henceforth'') plays a prominent role in justifying the postulates stating that agents are able to use logical laws in their reasoning. In the current approach no such assumption is necessary because we argue at a higher level of abstraction: only the final result is important, not the intermediate ones.

Since the formula $K_i^{\exists} \alpha \to \alpha$ is merely a definitional stipulation, it seems uncontroversial. If that axiom is adopted, the formula $K_i^n \alpha \to \alpha$ can be proved for all natural numbers $n$. More interesting are variants of the consistency axiom. A weak consistency criterion is that agents do not believe apparent contradictions, i.e., their explicit knowledge is consistent: $K_i^0\alpha \to \lnot K_i^0\lnot \alpha$. A stronger requirement is that an agent's implicit knowledge be consistent, which is captured by the schema $K_i^{\exists} \alpha \to \lnot K_i^{\exists} \lnot
\alpha$. That is, the agent's explicit knowledge is free of contradictions and his inference procedures are sound, so that consistency is preserved. (The latter formula is indeed a stronger condition than the former one because $K_i^{\exists}\alpha$ follows from $K_i^0\alpha$.)
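The weak consistency criterion for explicit knowledge admits a direct check: no formula occurs in the knowledge base together with its negation. The formula encoding below is an illustrative assumption.

```python
def negation(f):
    """Return the negation of f, collapsing double negations."""
    return f[1] if isinstance(f, tuple) and f[0] == "not" else ("not", f)

def explicitly_consistent(kb):
    """Weak consistency: no formula and its negation are both explicitly known."""
    return all(negation(f) not in kb for f in kb)

assert explicitly_consistent({"p", ("not", "q")})
assert not explicitly_consistent({"p", ("not", "p")})
```

The stronger criterion for $K_i^{\exists}$ cannot be checked this locally: it additionally requires that the agent's inference procedures are sound, so that no contradiction becomes derivable from a consistent base.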

What about self-introspection? If an agent knows or does not know something explicitly, he only needs to reflect about himself to find it out. The cost of reflection is usually low, so it can be assumed that self-reflection can be performed in constant time. Hence, the formulae $K_i^0 \alpha \to K_i^{1}K_i^0 \alpha$ and $\lnot K_i^0
\alpha \to K_i^{1}\lnot K_i^0 \alpha$ seem plausible.
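The constant-time-reflection assumption has a natural computational reading: if explicit knowledge is stored as a hash set, introspection is a single membership test. The knowledge base below is a hypothetical example.

```python
explicit_kb = {"alpha", "beta"}        # illustrative explicit knowledge base

def knows_explicitly(alpha):
    """Self-introspection about explicit knowledge: one set lookup,
    expected constant time -- hence the time bound of 1 in the axioms."""
    return alpha in explicit_kb

assert knows_explicitly("alpha")       # positive introspection succeeds
assert not knows_explicitly("gamma")   # negative introspection succeeds too
```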

The situation is quite different when an agent $i$ tries to compute the truth value of $K_i^n\alpha$ where $n>0$. Although he may actually compute $\alpha$ and reflect on that, the result of his computation does not say much about his ability to infer $\alpha$. He may succeed in computing $\alpha$ within $n$ time units, but there is still a chance that this success is only accidental and not reproducible. On the other hand, even if $K_i^n\alpha$ is false (i.e., agent $i$ cannot compute $\alpha$ reliably within $n$ time units), $i$ may still happen to infer $\alpha$ successfully in less than $n$ time units. Consequently, it is not sound to infer that he can or cannot compute $\alpha$ reliably within $n$ time units. So, generally neither $K_i^n \alpha \to K_i^{n+1}K_i^n \alpha$ nor $\lnot
K_i^n \alpha \to K_i^{n+1}\lnot K_i^n \alpha$ is valid. The same argument applies to the formulae $K_i^{\exists} \alpha \to
K_i^{\exists}K_i^{\exists} \alpha$ and $\lnot K_i^{\exists} \alpha \to
K_i^{\exists}\lnot K_i^{\exists} \alpha$. Those principles can only be assumed for agents who know their abilities well. They can, for example, be postulated for an agent who works deterministically. For such an agent, a small number of tests may suffice to determine whether he can perform a task under given conditions.