Complexity and the lack of knowledge

It is often important to know not only what an agent knows, but also what he does not know within a certain time limit. An agent's lack of knowledge may restrict his choices, so his actions can be predicted or explained accordingly. For instance, consider a rational, utility-maximizing agent who must complete a task under a certain time constraint. Suppose, moreover, that computing some plan for the task is relatively easy, but that computing the optimal plan is known to be so hard that it cannot be done within the given constraint. (Many optimization problems fall into this category.) It is then rational to find and execute another plan, and not to attempt to compute the optimal one at all.
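
As a purely illustrative sketch (in Python, with the helper names estimate_optimal_time, compute_optimal and compute_heuristic invented for the example), the decision pattern looks as follows: if the estimated time for computing the optimal plan exceeds the deadline, the agent does not even attempt the optimization and settles for a cheaper plan.

def choose_plan(task, deadline,
                estimate_optimal_time, compute_optimal, compute_heuristic):
    # estimate_optimal_time(task) is a lower-bound estimate (hypothetical)
    # of the time needed to compute the optimal plan for this task.
    if estimate_optimal_time(task) > deadline:
        # The agent can predict that it will not know the optimal plan
        # in time, so it executes a suboptimal but feasible plan.
        return compute_heuristic(task)
    return compute_optimal(task)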

Another example illustrates the point. When we use public-key cryptography to encrypt a message, we want to be sure that someone without the secret key will not be able to learn its content within a reasonable time, although he can in principle infer it from the public key. The expectation that our message cannot be decrypted quickly rests on the complexity of the reasoning required.

The absence of certain information can sometimes be deduced from the presence of other information, using assumptions about the agents' consistency. There is, however, another way to reason about the lack of algorithmic knowledge: the expectation that something cannot be known within a given time limit can be based on the complexity of the reasoning required. We use lower complexity bounds to estimate the least amount of time an agent would need to infer a sentence, and hence to determine what he cannot reliably know within a given amount of time.
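
As a toy numerical illustration (all figures invented), a brute-force lower bound of $2^n$ inference steps on a problem of size $n$, combined with an assumed speed of the agent's machine, already yields such an estimate:

n = 80                        # problem size
ops_per_second = 10**9        # assumed speed of the agent's machine
steps_lower_bound = 2**n      # lower bound on the number of inference steps
min_seconds = steps_lower_bound / ops_per_second
# min_seconds is about 1.2 * 10**15 seconds (tens of millions of years),
# so the agent cannot reliably know the answer within any practical deadline.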

To reason about what agents cannot infer under given constraints, we define for each agent $i$ a partial function $f_i^{\prime}: \mathcal{L}_{N}^{AK} \mapsto \omega$. Intuitively, $f_i^{\prime}(\alpha)$ is the minimal amount of time that $i$ needs to compute $\alpha$. Once such a function has been specified, any formula $\lnot K_i^{f_i^{\prime}(\alpha)} \alpha$ can be assumed, independently of the truth value of $K_i^{\exists}\alpha$: even if $i$ eventually succeeds in inferring $\alpha$, the computation takes longer than $f_i^{\prime}(\alpha)$.
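
A minimal sketch, assuming formulas are represented as strings and reading $K_i^{t} \alpha$ as ``$i$ can compute $\alpha$ within $t$ time units''; the names f_prime and cannot_know_within are illustrative, not part of the formal system:

def cannot_know_within(f_prime, agent, alpha, t):
    # f_prime[agent] is a partial map from formulas to the minimal time
    # the agent needs to compute them (a lower-bound estimate).
    bound = f_prime.get(agent, {}).get(alpha)
    if bound is None:
        return None   # no bound specified, so nothing can be assumed
    # If t does not exceed the bound, the formula "not K_i^t alpha"
    # may be assumed, whatever the truth value of K_i^E alpha.
    return t <= bound

# Example: a formula whose estimated minimal computation time is 3600 units.
f_prime = {"i": {"alpha": 3600}}
assert cannot_know_within(f_prime, "i", "alpha", 600) is True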

With the ability to reason about algorithmic knowledge or the lack thereof, agents can develop intelligent inference strategies for solving problems under time constraints. The logics of algorithmic knowledge can be implemented and executed directly. When an agent $i$ has to solve a problem $\alpha$, he first checks whether $\alpha$ belongs to a known problem class $\mathcal{C}$. If it does not, a ``universal problem solver'' (for any problem that can be described in the language) is activated, and $i$ can only hope to find the solution quickly. But if $\alpha \in \mathcal{C}$, $i$ can estimate its complexity and then decide whether the optimal solution can be obtained in time or whether a heuristic is needed. Based on that information he can then choose a procedure for solving $\alpha$. Other agents can also reason about $i$ and about the problems he has to solve, and explain or predict his actions accordingly.
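
The strategy just described can be sketched directly; the interface below (contains, estimated_time, solve_exactly, solve_heuristically) is hypothetical and only names the ingredients mentioned above:

def solve_under_deadline(alpha, deadline, known_classes, universal_solver):
    # known_classes: recognized problem classes, each with a membership test,
    # a complexity estimate, and dedicated exact and heuristic solvers.
    for cls in known_classes:
        if cls.contains(alpha):
            if cls.estimated_time(alpha) <= deadline:
                return cls.solve_exactly(alpha)    # optimal solution in time
            return cls.solve_heuristically(alpha)  # fall back to a heuristic
    # alpha is not recognized: activate the universal problem solver
    # and hope that a solution is found quickly.
    return universal_solver(alpha)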

