At the beginning of epistemic logic, attempts were made to develop systems describing the actual knowledge of real agents. The term ``knowledge'' was originally used in its ordinary-language meaning: to say that an agent knows a sentence means either that he consciously assents to it or that he immediately sees it to be true when the question is presented. However, it was soon realized that describing actual knowledge is a nearly impossible task: actual knowledge does not seem to obey any logic. If we consider real agents and ask what they actually know, we find empirically that an agent's knowledge is often not closed under any logical law. From a given epistemic statement one cannot reliably infer any other epistemic statement; that is, one can hardly find any genuine epistemic statement that may claim universal validity. There seems to be no general epistemic principle that cannot be refuted by a counter-example. It seems impossible to develop a logic of actual knowledge because -- to quote Eberle ([Ebe74]) -- such a logic must be able to ``provide for total ignoramuses (ones who know nothing), complete idiots (ones who cannot draw even the most elementary inferences), and ultimate fools (ones who believe nothing but contradictions)''.
To make epistemic logic possible at all, idealizations were made concerning the reasoning capacities of the agents, and modal systems were proposed to describe such idealized agents. However, the idealizations made by modal epistemic logic are too strong for any realistic agent: they require agents to be perfect reasoners who know all logical consequences of what they know, including all logical truths. If ``knowledge'' is interpreted in its ordinary-language meaning, then such perfectly rational, logically omniscient agents do not exist. No human agent has the reasoning capacities required by modal epistemic logic, nor can we build artificial agents possessing the reasoning power described by normal modal systems. Hence modal epistemic logic cannot be interpreted as describing what agents actually know.
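The logical omniscience just mentioned can be stated precisely: it follows from two features shared by all normal modal systems, namely the distribution axiom and the necessitation rule (notation as in the standard modal framework, with $K$ read as the knowledge operator):
\[
  K(\varphi \to \psi) \to (K\varphi \to K\psi) \qquad \text{(axiom K)}
\]
\[
  \text{if } \vdash \varphi \text{ then } \vdash K\varphi \qquad \text{(necessitation)}
\]
Together these principles entail that whenever an agent knows $\varphi$ and $\varphi$ logically implies $\psi$, he also knows $\psi$; in particular, he knows every theorem of the logic.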
To save modal logic as a logic of knowledge, a new interpretation of epistemic logic has been proposed: the concept of implicit knowledge was introduced, and modal epistemic logic is now interpreted as describing this concept. On this reading, epistemic logic does not describe what an agent actually knows, but only what is implicitly represented in his information state, i.e., what follows logically from his actual knowledge. What an agent actually knows is called his explicit knowledge.
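The relation between the two notions can be stated schematically. Writing $K_e$ for explicit and $K_i$ for implicit knowledge (the subscripted operators are introduced here only for illustration), the intended reading is:
\[
  K_i\varphi \quad\text{iff}\quad \{\psi \mid K_e\psi\} \models \varphi
\]
Implicit knowledge is thus the closure of explicit knowledge under logical consequence: $K_e\varphi$ implies $K_i\varphi$, but not conversely.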
In the following I briefly review the modal approach to epistemic logic. (An overview of basic modal logic is given in appendix A.) I shall argue that this approach cannot serve as an adequate foundation for agent theories, because modal epistemic logic cannot account for the concept of explicit knowledge, yet only explicit knowledge can constitute a cognitive state that plays a justificatory role for agents' actions.