

Explicit knowledge

In the last chapter I reviewed some prominent attempts to model the notion of explicit knowledge and discussed their main problems. In my opinion, the existing approaches fail to capture explicit knowledge adequately because they try to model entailment relations where none exist, namely within the set of sentences known by an agent at a single point in time. Such attempts are doomed to failure because an agent's explicit knowledge at a given moment is simply not closed under logical laws and therefore cannot be described by any nontrivial logic. Forcing regularities upon an agent's explicit knowledge merely to make reasoning about it possible is not the proper way to cope with these difficulties.

In the following I shall suggest a new approach to reasoning about explicit knowledge which overcomes the drawbacks of the existing approaches. The idea is to consider the evolution of an agent's knowledge over time: at one moment an agent may or may not explicitly know a certain consequence of his knowledge; however, he can perform some reasoning steps in order to know it at a later moment. I have argued that the traditional approaches fail to capture the concept of actual knowledge correctly because they do not take the cost of inferring new information into account: they assume that whenever an agent knows all premises of a valid inference rule, he automatically knows the conclusion as well. I will argue that axioms for epistemic logics should instead have the form: ``if the agent knows all premises of a valid inference rule, and if he performs the correct inference step, then he will know the conclusion''. In section 4.1 I shall discuss the main intuitions behind my approach. Then, in section 4.2, formal systems will be defined and discussed.
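To illustrate the intended form of such axioms, consider modus ponens. Writing $K\alpha$ for ``the agent explicitly knows $\alpha$'' and $\langle F\rangle\alpha$ for ``$\alpha$ holds after the agent has performed some reasoning step'' (this notation is only a sketch; the precise operators are the subject of section 4.2), the schema described above could be rendered as

\[ K\varphi \wedge K(\varphi \rightarrow \psi) \rightarrow \langle F\rangle K\psi \]

that is, an agent who explicitly knows both $\varphi$ and $\varphi \rightarrow \psi$ can come to know $\psi$ by actually performing an application of modus ponens, rather than knowing $\psi$ automatically and at no cost.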


