Let us discuss some potential candidates for dynamic-epistemic axioms. We shall examine the common modal axioms and see if their dynamic-epistemic counterparts are suitable for formalizing explicit knowledge.
As I have argued in the previous section, if a formula $\psi$ can be derived from the premises $\varphi_1, \dots, \varphi_n$ by means of the inference rules, then the correct corresponding epistemic axiom should say that an agent who explicitly knows all the premises can, after some course of thought, come to know the conclusion. Translated into our dynamic-epistemic language, keeping in mind the intuitive reading of the operator $\langle F_i \rangle$ ($\langle F_i \rangle \varphi$: $\varphi$ holds after some course of thought of agent $i$), the axiom becomes $K_i\varphi_1 \wedge \dots \wedge K_i\varphi_n \to \langle F_i \rangle K_i\psi$. In particular, instances of this schema for the individual inference rules could be assumed as axioms for explicit knowledge.
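For instance, taking modus ponens and conjunction introduction as the underlying inference rules, and writing $K_i$ for explicit knowledge and $\langle F_i \rangle$ for "after some course of thought of agent $i$" (a reconstruction of the notation, chosen to match the intuitive readings given in the text), two plausible instances of the schema are:

```latex
% Modus ponens: knowing \varphi and \varphi \to \psi, the agent can
% come to know \psi after some course of thought
K_i\varphi \wedge K_i(\varphi \to \psi) \to \langle F_i \rangle K_i\psi
% Conjunction introduction: knowing both conjuncts, the agent can
% come to know their conjunction
K_i\varphi \wedge K_i\psi \to \langle F_i \rangle K_i(\varphi \wedge \psi)
```

Note that the conclusion is prefixed by $\langle F_i \rangle$: the agent is not required to know $\psi$ immediately, only to be able to reach that knowledge by performing a reasoning step.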
I have also argued that some persistence postulates must be assumed in order to guarantee that all premises, once known by an agent, remain available after the agent performs a reasoning step. The idea that known sentences remain known after any course of thought of an agent can be expressed through the axiom $K_i\varphi \to [F_i] K_i\varphi$, provided that $\varphi$ is persistent.
According to their intuitive interpretation, the dual operators $\langle F_i \rangle$ and $[F_i]$ must satisfy at least the postulates of K4, the minimal temporal logic of transitive time (see also appendix A), i.e., the (temporal) necessitation rule (from $\varphi$ to infer $[F_i]\varphi$) and the following two axioms should be assumed: the distribution axiom $[F_i](\varphi \to \psi) \to ([F_i]\varphi \to [F_i]\psi)$ and the transitivity axiom $[F_i]\varphi \to [F_i][F_i]\varphi$.
Can stronger principles be imposed on the operator $[F_i]$? Linearity does not seem reasonable: courses of thought can go in several different directions. There are certainly many ways to extend one's knowledge, e.g., by applying two different inference rules. We can imagine that an agent currently has a certain information state $s$ where two sentences $\varphi$ and $\psi$ are implicitly available. After some course of thought he knows $\varphi$ explicitly, and after some other course of thought he knows $\psi$. In this way two different new information states $s_1$ and $s_2$ are possible: in one state (say $s_1$) the formula $\varphi$ is (explicitly) known, but not $\psi$, and in the other one $\psi$ is known, but not $\varphi$.
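In the notation used here ($K_i$ for explicit knowledge, $\langle F_i \rangle$ for the dynamic operator; the formalization is my illustration of the scenario, not a quotation), the situation at the initial state can be described by a formula witnessing that the two developments are genuinely different:

```latex
% Two courses of thought are possible, and neither resulting state
% contains both pieces of explicit knowledge
\langle F_i \rangle (K_i\varphi \wedge \neg K_i\psi)
  \wedge \langle F_i \rangle (K_i\psi \wedge \neg K_i\varphi)
```

Since neither of the two resulting states need be reachable from the other, a linearity principle, which would force all developments onto a single branch, would rule such situations out.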
Consider now the information state $s_1$, where $\varphi$ has been established. If the necessary conditions for establishing $\psi$ are still available as in the original state $s$, then the sequence of reasoning steps leading to $s_2$ could be started at $s_1$, leading to a new, more complete information state $s_3$ where both $\varphi$ and $\psi$ are known explicitly. The agent could also arrive at $s_3$ by starting, at $s_2$, the process of deriving $\varphi$. Thus, the principle of directedness seems attractive: any two developments originating from the same point will eventually be merged again. This principle corresponds to the axiom $\langle F_i \rangle [F_i]\varphi \to [F_i]\langle F_i \rangle \varphi$. In modal logic, this formula is known as schema (G).
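Semantically, on the usual relational reading with $R_i$ as the accessibility relation interpreting $[F_i]$ (the correspondence is standard; the notation here is mine), schema (G) corresponds exactly to directedness (confluence) of $R_i$:

```latex
% Directedness: any two R_i-successors of a state
% have a common R_i-successor
\forall s\, \forall s_1\, \forall s_2\,
  \big( (s \mathrel{R_i} s_1 \wedge s \mathrel{R_i} s_2)
    \to \exists s_3\, (s_1 \mathrel{R_i} s_3 \wedge s_2 \mathrel{R_i} s_3) \big)
```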
To distinguish genuine knowledge from belief, the axiom $K_i\varphi \to \varphi$ can be assumed. This Truth axiom seems unproblematic in dynamic-epistemic settings. As to consistency, two variants of the Consistency axiom are possible. The first is $\neg K_i(\varphi \wedge \neg\varphi)$, which says that agent $i$ does not believe obvious contradictions. In normal modal logics, that formula is equivalent to the second variant, $\neg(K_i\varphi \wedge K_i\neg\varphi)$. However, this need not be true in the context of dynamic-epistemic logic, because agents may believe two sentences without believing their conjunction. These two consistency axioms seem to be acceptable rationality postulates.
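The normal-modal-logic equivalence relies on closure of knowledge under conjunction, which is precisely what dynamic-epistemic logic gives up; a sketch of the direction that fails (the derivation is my reconstruction in the notation above):

```latex
% In a normal modal logic, closure under conjunction gives
K_i\varphi \wedge K_i\neg\varphi \to K_i(\varphi \wedge \neg\varphi),
% so \neg K_i(\varphi \wedge \neg\varphi) entails
% \neg(K_i\varphi \wedge K_i\neg\varphi).
% In dynamic-epistemic logic only the weaker principle
K_i\varphi \wedge K_i\neg\varphi \to \langle F_i \rangle K_i(\varphi \wedge \neg\varphi)
% is available: the agent may hold both beliefs without having
% yet performed the conjunction step, so the first consistency
% axiom no longer rules out the second situation.
```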
Let us now examine how the ability of the agents to introspect their knowledge can be captured within our dynamic framework. An agent's action of introspection can be considered one of his basic reasoning actions. Thus, we may view agent $i$'s introspection action as one part of his abstract action $F_i$. Consider positive introspection first. Suppose that $i$ knows $\varphi$. Can we infer that he will know $K_i\varphi$ after introspecting his knowledge that he knows $\varphi$? Not necessarily! We can assume that $i$ will know that he previously knew $\varphi$, but to support the inference that after his introspection action the agent knows that he knows $\varphi$, we need one more argument, namely that $i$'s knowledge of $\varphi$ will not be changed through his reasoning actions. We have argued previously that such a persistence axiom is reasonable for a subclass of formulae. Thus, we have the following axiom of positive introspection, which corresponds to the schema (4) in modal epistemic logic: $K_i\varphi \to \langle F_i \rangle K_i K_i\varphi$, provided that $\varphi$ is persistent.
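The two ingredients of the argument can be separated as follows (a reconstruction in the notation above):

```latex
% Ingredient 1 -- persistence: if \varphi is persistent, knowledge
% of \varphi survives any course of thought
K_i\varphi \to [F_i] K_i\varphi
% Ingredient 2 -- introspection as a reasoning step: by introspecting,
% the agent can come to know what he (still) knows.
% Together they support the dynamic analogue of schema (4):
K_i\varphi \to \langle F_i \rangle K_i K_i\varphi
```

Without persistence, introspection alone would only yield knowledge that $\varphi$ was known before the introspective step, not that it is still known after it.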
The same argumentation can be used to show that the candidate for the negative introspection axiom, $\neg K_i\varphi \to \langle F_i \rangle K_i \neg K_i\varphi$, is not acceptable. It can happen that after a reasoning step the agent knows something that he did not know previously, so formulae of the form $\neg K_i\varphi$ are not persistent, and the persistence argument used for positive introspection is unavailable.
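A counterexample sketch, in the notation above: suppose the agent does not yet know $\varphi$, but $\varphi$ is derivable from what he knows.

```latex
% At the current state, the agent lacks knowledge of \varphi
% but can acquire it by reasoning:
\neg K_i\varphi \wedge \langle F_i \rangle K_i\varphi
% Along the course of thought that derives \varphi, the formula
% \neg K_i\varphi becomes false; knowing it there would be knowledge
% of a falsehood, contradicting the Truth axiom K_i\varphi \to \varphi.
```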