
Summary

One of the principal goals of agent theories is to describe realistic, implementable agents, that is, agents which have actually been constructed or which could at least in principle be constructed. That goal cannot be reached unless the inherent resource-boundedness of agents is treated correctly. Since the modal approach to epistemic logic is not suited to formalizing resource-bounded reasoning, resource-boundedness remains one of the main foundational problems for any agent theory developed on the basis of modal epistemic logic.

My work is an attempt to provide theories of agency with a more adequate epistemic foundation. It aims at developing theories of mental concepts that make far more realistic assumptions about agents than existing theories do. The guiding principle of my theory is that the capacities attributed to agents must be empirically verifiable: it must be possible to construct artificial agents that satisfy the specifications determined by the theory. As a consequence, the unrealistic assumption that agents have unlimited reasoning capacities must be rejected.

In my opinion, resource-bounded reasoning cannot be formalized correctly by restricting the agents' rationality. That is, all attempts to model realistic agents by denying them the use of certain logical rules must be regarded as unsatisfactory. A lack of resources does not prevent an agent from using any of his inference rules. What can be restricted is not the set of logical laws available to him, but the number of times they can be applied. The correct way to formalize resource-boundedness is therefore to model how the availability of resources (or the lack thereof) influences an agent's computations.

To describe resource-bounded agents accurately, the cost of reasoning must be taken seriously. In this thesis I have developed a framework for modeling the relationship between knowledge, reasoning, and the availability of resources. I have argued that the correct form of an axiom for epistemic logic should be: if an agent knows all premises of a valid inference rule and performs the right reasoning, then he will know the conclusion as well. Because reasoning consumes resources, it cannot be safely assumed that an agent can compute his knowledge when he lacks the resources to perform the required reasoning. I have demonstrated that, on the basis of this idea, the problems of traditional approaches can be avoided and rich epistemic logics can be developed which account adequately for our intuitions about knowledge.
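Schematically, the proposed axiom form can be sketched as follows; the notation here ($K$ for knowledge, $\mathit{does}(\alpha)$ for performing the reasoning act $\alpha$) is illustrative and not the official syntax of the systems in question:

```latex
% Hedged sketch: given a valid inference rule with premises
% \varphi_1, \ldots, \varphi_n and conclusion \psi, an agent who
% knows all premises AND actually performs the corresponding
% reasoning act \alpha comes to know the conclusion. Without
% does(\alpha) -- i.e., without spending resources on reasoning --
% no closure of knowledge is guaranteed.
(K\varphi_1 \land \cdots \land K\varphi_n \land \mathit{does}(\alpha))
  \rightarrow K\psi
```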

As a first step, in chapter 4 I have investigated how the explicit concept of knowledge can be represented. I have developed systems of explicit knowledge that solve the logical omniscience problem of epistemic logic while at the same time accounting for the agents' full rationality. The agents are non-omniscient because their actual (or explicit) knowledge at a single time point need not be closed under any logical law; it is even possible that at some information states they know no logical truths at all. On the other hand, they are non-ignorant because they are capable of logical thinking: they can use their reasoning capacities to infer new information from what they already know. Their rationality is not restricted by any artificial, ad hoc postulate saying that their inference mechanisms are incomplete.
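The distinction between non-omniscience and non-ignorance can be illustrated with a minimal sketch (all names and the formula encoding are hypothetical, not taken from the thesis): explicit knowledge is a finite set of formulas that is closed under nothing by default, but grows when the agent actively applies an inference rule such as modus ponens.

```python
# Explicit knowledge as a finite set of formulas: nothing is derived
# automatically, so the set need not be closed under any logical law.
# Atomic formulas are strings; an implication p -> q is encoded as
# the tuple ("->", "p", "q"). All names here are illustrative.

def modus_ponens(knowledge):
    """Return the conclusions the agent COULD derive in one step."""
    derivable = set()
    for f in knowledge:
        if isinstance(f, tuple) and f[0] == "->" and f[1] in knowledge:
            derivable.add(f[2])
    return derivable

def reason_step(knowledge):
    """One round of reasoning: explicit knowledge grows only when the
    agent actually applies a rule, not by logical closure."""
    return knowledge | modus_ponens(knowledge)

state = {"p", ("->", "p", "q")}
assert "q" not in state          # non-omniscient: q is not yet known
state = reason_step(state)
assert "q" in state              # non-ignorant: reasoning derives q
```

The point of the sketch is that knowing `p` and `p -> q` does not by itself yield knowledge of `q`; only the explicit reasoning step does.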

In the next step (chapter 5) I have introduced algorithmic knowledge -- a concept of knowledge suited for establishing direct relations between an agent's available resources and his knowledge. I have argued that the proposed algorithmic concept of knowledge can serve as a basis for action. The main idea is to consider how many resources an agent will need to compute the answer to a certain query; that question can be answered by combining epistemic logic with a complexity analysis. Following this strategy, I have developed systems for reasoning about algorithmic knowledge which describe non-omniscient, albeit fully rational agents. Moreover, the defined systems have enough expressive power to formalize quantitative resource constraints.
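The resource-bounded reading of knowledge can be sketched as follows. This is a toy model under stated assumptions, not the systems defined in the thesis: resources are counted as inference steps, formulas use the same hypothetical encoding as above, and the only rule is modus ponens.

```python
def algorithmic_knows(query, knowledge, budget):
    """An agent algorithmically knows `query` iff his reasoning
    procedure can derive it from `knowledge` within `budget`
    inference steps. Implications p -> q are encoded as tuples
    ("->", "p", "q"); all names are illustrative."""
    known = set(knowledge)
    steps = 0
    while query not in known and steps < budget:
        new = {f[2] for f in known
               if isinstance(f, tuple) and f[0] == "->" and f[1] in known}
        if new <= known:          # nothing new derivable: give up
            return False
        known |= new
        steps += 1
    return query in known

kb = {"p", ("->", "p", "q"), ("->", "q", "r")}
assert algorithmic_knows("r", kb, budget=2)      # two steps suffice
assert not algorithmic_knows("r", kb, budget=1)  # too few resources
```

The same query is known or unknown depending on the available budget, which is exactly the kind of quantitative constraint the chapter's systems are meant to express.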


2001-04-05