Logical Consequence

The aim of these notes is to achieve a basic understanding of the concepts of mathematical logic. The process of logical deduction is clearly a central theme of mathematics, where the idea is to prove a stated result by means of an argument which is broken down into small steps, each of which should be obviously valid to the intended audience. This is typically done in a slightly informal fashion, where the validity of each stage of the proof is supposed to be clear, but does not necessarily follow a fixed and clearly stated framework. The study of logic is meant to clarify how this process works, and give a clearly defined framework for logical reasoning. While the history of logic goes back thousands of years, a solid mathematical foundation was only developed in the late nineteenth and early twentieth centuries. For example, the Principia Mathematica published in 1910 by Whitehead and Russell was an attempt to show that all of mathematics can be derived from some precisely stated set of axioms together with inference rules for obtaining conclusions from lists of premises. From this standpoint, logic becomes another field of mathematical study. Just as rings, fields, measure spaces, etc., are objects of mathematical study, so is logic. As in any of these other areas, we use standard mathematical reasoning to prove results about logical systems. However, these results can then shed light on the process of mathematical reasoning itself, as well as providing justification for the underlying frameworks used for much of mathematics.

This has been a very successful endeavor, with theories such as Zermelo-Fraenkel set theory (ZF) providing the standard set of axioms used by mathematicians in many different fields of study. Many important ideas and results have arisen from the study of logic, such as the relative consistency of different theories and the independence of certain statements. The axiom of choice, for example, is often considered to be intuitively obvious but, at other times, has been considered controversial. Thanks to the mathematical study of logic, it is now known to be independent of the other axioms of ZFC. Similarly, the continuum hypothesis, which Georg Cantor spent many years trying to prove, has also been shown to be independent of ZFC. It is also known that, although the axiom of choice can be used to construct sets which are not Lebesgue measurable, it is consistent with the axiom of dependent choice that all sets of real numbers are measurable. In the other direction, reverse mathematics has been successful in determining precisely which axioms are really required for various mathematical theorems. At a higher level, results such as the completeness theorem have put the theory on a solid footing, establishing the equivalence of semantic truth and syntactic provability in first-order logic. The incompleteness theorem, on the other hand, shows that no consistent recursively enumerable proof system can prove all true statements about the arithmetic of the natural numbers and, furthermore, no sufficiently strong proof system is able to prove its own consistency (unless it is actually inconsistent). For example, it is not possible to prove the consistency of ZFC just by using the axioms and rules of ZFC itself, although it is possible in the presence of additional large cardinal axioms. Similarly, Peano arithmetic is not able to prove its own consistency, but it is possible if the well-ordering property of the ordinal {\epsilon_0} is added.

The mathematical study of logic has also made clear the distinction between classical logic and intuitionistic or constructive logics, in which the law of the excluded middle does not hold. Another consequence of putting logic on a solid mathematical foundation is that the validity of mathematical proofs can be checked in an entirely systematic way which could, in theory, be carried out by computer. There are various proof-checkers available, although they are currently much more difficult and tedious to use than writing out proofs for human readers, so tend not to be used for most mathematics. There is also a large intersection between mathematical logic and computer science. For example, the Curry-Howard correspondence gives a one-to-one correspondence between statements of intuitionistic implicational logic and the types of valid programs in simply typed lambda calculus, with the programs or lambda expressions playing the part of proofs. This correspondence has been used as the basis for various computer implementations of proof systems.

The starting point for most logical theories is a language in which statements can be formed according to rules for what constitutes a valid statement (wffs, or well-formed formulas). This generally includes certain special logical connectives which allow statements to be formed expressing a logical connection between their component parts. For example, if `P’ and `Q’ are valid sentences, then `{P\rightarrow Q}‘ is also valid (meaning, `if P then Q’). These logical connectives come with prescribed rules of inference and, together with a fixed list of axioms, allow us to prove theorems. This does, however, raise various questions. What is the `correct’ set of connectives, and what is the correct set of rules of inference that they should follow? It is possible that, with different connectives or rules of inference, an entirely separate set of theorems would result. In this post, I take a step back from such specific theories or rules. The idea is to first look at the most general concept of what a logic is and, only then, determine exactly what connectives or rules of inference are possible or desirable.

Consequence Relations

Possibly the most basic concept in logic is that of entailment or inference. We start with a collection of premises, which are statements in some language and, according to some rules, we establish a result. The starting point is a set {L}, which can be thought of as a set of well-formed statements in some language although, to be as general as possible, we just assume that {L} is a set with no specific restriction or interpretation assumed of its elements. We write

\displaystyle  a_1,a_2,a_3,\ldots,a_n\vdash b, (1)

for {a_i,b\in L}, to mean that {a_1,\ldots,a_n} entails {b} or, equivalently, that {b} is a logical consequence of {a_1,a_2,\ldots,a_n}. Quite what the relation {\vdash} really means is left open at this stage. For example, considering a collection of interpretations or models, each of which assigns truth values to the elements of {L}, (1) can be taken to mean that any model assigning the truth value 1 to the {a_i} also assigns the value 1 to {b}. Alternatively, (1) could mean that, with respect to some formal proof system, there exists a proof of {b} from the premises {a_i}. More generally,

\displaystyle  \Gamma\vdash a (2)

means that {a\in L} is a logical consequence of the set of premises {\Gamma\subseteq L}. For convenience, we often put a list of subsets or elements of {L} as the list of premises, so that

\displaystyle  \Gamma_1,\Gamma_2,\ldots,\Gamma_m,a_1,a_2,\ldots,a_n\vdash b,

for {\Gamma_i\subseteq L} and {a_i,b\in L}, is just another way of writing

\displaystyle  \Gamma_1\cup\Gamma_2\cup\ldots\cup\Gamma_m\cup\{a_1,a_2,\ldots,a_n\}\vdash b.

Similarly, the statement {\vdash a} with no premises is just another way of writing {\emptyset\vdash a}.

As a matter of terminology, expression (2) is a sequent, and the {\vdash} symbol is referred to as a turnstile. The left hand side, {\Gamma}, is the set of premises or the antecedent and {a} is the consequent, and we say that {\Gamma} entails {a} or that {a} is a consequence of {\Gamma}. Although we are being rather general here and not considering any particular interpretation of the set {L} or of the relation {\vdash}, there is a short list of `obvious’ properties which should hold for logical inference.

Definition 1 A consequence relation {\vdash} on {L} is a relation on {\mathcal{P}L\times L} satisfying

  1. {\Gamma\!,a\vdash a} (reflexivity/axiom of identity),
  2. if {\Gamma\vdash a} then {\Gamma\!,\Delta\vdash a} (weakening),
  3. if {\Gamma\vdash x} for all {x\in\Delta} and {\Gamma\!,\Delta\vdash a} then {\Gamma\vdash a} (transitivity/rule of cut),

for all {\Gamma,\Delta\subseteq L} and {a\in L}. We will say that {\vdash} is finitary if, whenever {\Gamma\vdash a}, we have {\Delta\vdash a} for some finite {\Delta\subseteq\Gamma}.

A pair {(L,\vdash)} consisting of a set {L} together with a consequence relation {\vdash} on {L} will be known as a logic. We will often just use {L} to denote the logic {(L,\vdash)}.
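For small finite sets, the conditions of definition 1 can be checked exhaustively. The following Python sketch (the function names are my own invention, not part of any standard library) verifies reflexivity, weakening and cut by brute force, and confirms that the minimal relation {\Gamma\vdash a} iff {a\in\Gamma} is indeed a consequence relation:

```python
from itertools import chain, combinations

def subsets(L):
    """All subsets of the finite set L, as frozensets."""
    xs = list(L)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(xs, n) for n in range(len(xs) + 1))]

def is_consequence_relation(L, entails):
    """Check the three conditions of definition 1 for a relation
    entails(Gamma, a) on subsets of the finite set L."""
    P = subsets(L)
    for G in P:
        for a in L:
            # 1. reflexivity: Gamma, a |- a
            if not entails(G | {a}, a):
                return False
            for D in P:
                # 2. weakening: Gamma |- a implies Gamma, Delta |- a
                if entails(G, a) and not entails(G | D, a):
                    return False
                # 3. cut: Gamma |- x for all x in Delta and
                #    Gamma, Delta |- a together imply Gamma |- a
                if all(entails(G, x) for x in D) and \
                        entails(G | D, a) and not entails(G, a):
                    return False
    return True

# minimal consequence relation: Gamma |- a iff a is in Gamma
minimal = lambda G, a: a in G
```

For a three element set this is only a few hundred checks, so runs instantly, although the cost grows exponentially with the size of {L}.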

The notion of consequence relations goes back to Tarski (Logic, Semantics, Metamathematics). For logical consequence to be finitary means that the proof of a statement only ever requires invoking finitely many premises. This is clearly a desirable property if we want a computable theory, where entailments can be written out in full and, potentially, verified by a computer. Note that any consequence relation {\vdash}, whether or not it is finitary, uniquely determines a finitary consequence relation by {\Gamma\vdash_f a} iff {\Delta\vdash a} for some finite {\Delta\subseteq\Gamma}.

It should be noted that there are more general logics than those given by definition 1. We could drop any, or all, of the defining properties. For example, non-monotonic logics do not satisfy the second condition. Furthermore, it is possible to generalise the antecedent to be a multiset where the number of times that each element occurs is important, or even a sequence where the number of occurrences and the order of the premises matters. I will not, however, be concerned with such logics here; those satisfying the definition above include everything which I intend to consider.

There will generally be many different consequence relations on a set {L}. A relation {R} on {\mathcal P L\times L} can alternatively be viewed as a subset of {\mathcal P L\times L} consisting of all pairs {(\Gamma,a)} for which {\Gamma Ra}. These can be ordered by inclusion {\subseteq}, inducing an ordering on relations {R_1\le R_2} iff {\Gamma R_1a} implies {\Gamma R_2a}. We will also say that {R_1} is weaker than {R_2} or, equivalently, that {R_2} is stronger. In particular, this induces an ordering on the consequence relations. The following are examples of consequence relations.

  1. The minimal consequence relation on any set {L}, given by {\Gamma\vdash a} if and only if {a\in\Gamma}.
  2. The maximal consequence relation on any set {L}, given by {\Gamma\vdash a} for all {\Gamma\subseteq L} and {a\in L}.
  3. The Boolean domain containing two values, thought of as `false’ and `true’. These are variously denoted as {\{0,1\}}, {\{\bot,\top\}}, {\{F,T\}} or simply {\{{\rm False},{\rm True}\}}. The consequence relation on {B=\{F,T\}} is defined by {\Gamma\vdash a} if either {a=T} or {a\in\Gamma}. It can be seen that this is the minimal consequence relation satisfying {\vdash T}.
  4. Any map (truth valuation) {v} from {L} to the Boolean logic {B} defines a consequence relation on {L} given by {\Gamma\vDash_v a} iff {v(a)=T} whenever {v(x)=T} for all {x\in\Gamma}. Equivalently, {v(\Gamma)\vdash v(a)}. This corresponds to the rather idealised situation where {L} is a set of logical statements, each of which is either true or false. Assuming that {\Gamma} is a set of true statements, then {\Gamma\vDash_v a} iff {a} is true.
  5. A set {\mathcal M} of models or interpretations on {L} defines, for each {m\in\mathcal M}, a truth valuation {v_m\colon L\rightarrow B}. We define a consequence relation on {L} by {\Gamma\vDash a} iff, for each {m\in\mathcal M} satisfying {v_m(x)=T} for all {x\in\Gamma}, we have {v_m(a)=T}. Equivalently, {\Gamma\vDash_{v_m}a} for all {m\in\mathcal M}. This is known as semantic consequence, and has an intuitive interpretation as follows. Consider {\mathcal M} as a collection of universes, within which, every statement in {L} has a definite truth value. A set {\Gamma} of premises restricts us to being in {\mathcal M_\Gamma}, the set of universes in which all statements in {\Gamma} are true. Writing {\Gamma\vDash a} signifies that {a} is true for all universes in {\mathcal M_\Gamma}.

    This interpretation leads us to consider the concepts of consistency and explosion. A set {\Gamma\subseteq L} is consistent if there exists at least one model {m\in\mathcal M} in which all elements of {\Gamma} are true and, otherwise, is inconsistent. From an inconsistent set of premises {\Gamma}, everything follows, so {\Gamma\vDash a} for all {a\in L}. This is the principle of explosion (“from falsehood, everything”). On the other hand, for a reasonably expressive logic system, every model will make some statements false. For example, if {L} contains a statement `P’ and another statement `not P’, then these should not both be true in any model and, hence, if {\Gamma} is consistent then it will not entail everything in {L}. This suggests the concept of consistent and inconsistent sets of premises in a general logic: {\Gamma} is inconsistent if {\Gamma\vdash a} for all {a\in L} and, otherwise, is consistent. The logic itself is inconsistent if {\vdash a} for all {a\in L} and, otherwise, is consistent.

  6. We can define a consequence relation on any collection {L} of subsets of a set {X}. Say {\Gamma\vdash a} if {\bigcap\Gamma\subseteq a}. For example, {L} could be the power set {\mathcal P X}, or it could be the collection of open (resp., closed, compact) subsets of a topological space.

    This consequence relation can also be defined by the collection {\{v_x\colon x\in X\}} of truth valuations given by {v_x(a)=T} if {x\in a} and {v_x(a)=F} if {x\not\in a}.

  7. Any collection of consequence relations {\vdash_i} ({i\in I}) on a set {L} has a unique infimum given by {\Gamma\vdash a} iff {\Gamma\vdash_i a} for all {i\in I}.
  8. For any relation {R} on {\mathcal P L\times L}, there will be a unique minimal consequence relation such that {\Gamma\vdash a} whenever {\Gamma Ra}. This is the consequence relation generated by {R}.
  9. For any logic {(L,\vdash)} and set of premises {\Gamma\subseteq L}, there exists a minimal consequence relation {\vdash_\Gamma} on {L} satisfying {\Delta\vdash_\Gamma a} whenever {\Delta\vdash a}, and {\vdash_\Gamma x} for all {x\in\Gamma}. This has the rather simple construction as {\Delta\vdash_\Gamma a} iff {\Gamma\!,\Delta\vdash a}. Such constructions are common. For example, {L} may consist of statements built from some specified list of symbols combined with logical connectives. The base consequence relation {\vdash} would then satisfy the deduction rules associated with those connectives, and we add a set {\Gamma} of axioms specific to the theory at hand to obtain the required consequence relation.
  10. Let {(L,\le)} be a partially ordered set, and write {\Gamma\vdash a} iff {\gamma\le a} for some {\gamma\in\Gamma}. Then, {\vdash} is a consequence relation.
  11. Syntactic consequence: Given rules for constructing well-formed formulas (wffs) in a given language, and rules of inference to generate new wffs from old ones, {\Gamma\vdash a} can be taken to denote that the wff {a} can be deduced from those in {\Gamma} by a sequence of applications of the deduction rules.
  12. For a strongly typed programming language (e.g., simply typed lambda calculus), we can let {L} be the collection of types and {\Gamma\vdash a} means that there exists a function defined in the language with return type {a} and whose arguments are of types in {\Gamma}.
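To make examples 4 and 5 concrete, each model can be represented by the set of statements that it makes true. Below is a minimal Python sketch (the names are my own choices); note how an inconsistent set of premises, satisfied by no model, entails everything, illustrating the principle of explosion discussed above:

```python
def semantic_consequence(models):
    """The consequence relation defined by a collection of models,
    each given as the set of statements it makes true (example 5)."""
    def entails(Gamma, a):
        Gamma = set(Gamma)
        # Gamma |= a iff every model making Gamma true makes a true
        return all(a in m for m in models if Gamma <= m)
    return entails

# three 'universes', each assigning truth values to p, q, r
models = [{'p', 'q'}, {'p', 'q', 'r'}, {'q'}]
entails = semantic_consequence(models)
```

Here `entails({'p'}, 'q')` holds, since both models making `p` true also make `q` true, while `entails({'q'}, 'p')` fails because of the third model. A premise such as `s`, true in no model, vacuously entails every statement.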

Finitary consequence relations are uniquely determined by their restriction to finite sets of antecedents. That is, in definition 1, we need only consider {\vdash} to be a relation on {\mathcal P_fL\times L}, where {\mathcal P_fL} denotes the collection of finite subsets of {L}. Then, the sets {\Gamma,\Delta\subseteq L} in the definition should be taken to be finite. Any such relation {\vdash} uniquely extends to a finitary consequence relation on {\mathcal PL\times L} by {\Gamma\vdash a} iff {\Delta\vdash a} for some finite {\Delta\subseteq\Gamma}. For finitary consequence relations, the statements of definition 1 can be significantly weakened.

Theorem 2 A relation on {\mathcal P_f L\times L} is a consequence relation iff,

  1. {a\vdash a},
  2. if {\Gamma\vdash a} then {\Gamma\!,b\vdash a},
  3. if {\Gamma\vdash b} and {\Gamma\!,b\vdash a} then {\Gamma\vdash a},

for all finite {\Gamma\subseteq L} and {a,b\in L}.

Proof: Note that the first statement is the axiom of identity for the special case {\Gamma=\emptyset}, the second statement is weakening for the case {\Delta=\{b\}}, and the third statement is transitivity for the case {\Delta=\{b\}}. So, consequence relations satisfy the properties above. It remains to show that if {\vdash} satisfies the properties of the theorem, then it is a consequence relation.

If {\Gamma,\Delta\in\mathcal P_fL} and {\Gamma\vdash a} then, to prove weakening, it needs to be shown that {\Gamma\!,\Delta\vdash a}. We will use induction on the size of {\Delta}. If {\Delta} is empty, then the statement is trivial. Otherwise, we can write {\Delta=\Theta\cup\{b\}} for {\Theta} strictly included in {\Delta}. By the induction hypothesis, {\Gamma\!,\Theta\vdash a}. The second statement of the theorem says that {\Gamma\!,\Theta,b\vdash a} or, equivalently, {\Gamma\!,\Delta\vdash a} as required. The axiom of identity, {\Gamma\!,a\vdash a} follows from the first statement of the theorem together with weakening.

For transitivity, suppose that {\Gamma\vdash x} for all {x\in\Delta} and that {\Gamma\!,\Delta\vdash a}. We need to show that {\Gamma\vdash a}. Again, induction on the size of {\Delta} will be used. If {\Delta} is empty then the statement is trivial. Suppose that {\Delta=\Theta\cup\{b\}} for {\Theta} strictly smaller than {\Delta}. Then, we have {\Gamma\!,b\vdash x} for all {x\in\Theta} and {\Gamma\!,b,\Theta\vdash a}. So, by the induction hypothesis, {\Gamma\!,b\vdash a} and {\Gamma\vdash b}. By the third statement of the theorem, this gives {\Gamma\vdash a} as required. ⬜

An alternative approach to logic is via closure operators. The logical closure of {\Gamma\subseteq L} is the set of all {a\in L} for which {\Gamma\vdash a}, and can be shown to satisfy the axioms of a closure operator. Similarly, {\Gamma} is said to be logically closed if every {a\in L} satisfying {\Gamma\vdash a} is itself in {\Gamma}. That is, {\Gamma} is logically closed if it equals its own logical closure.

Definition 3 A closure operator is a map {C\colon\mathcal{P}L\rightarrow\mathcal{P}L} satisfying

  1. {\Gamma\subseteq C(\Gamma)}.
  2. if {\Gamma\subseteq\Delta} then {C(\Gamma)\subseteq C(\Delta)}.
  3. {C(C(\Gamma))=C(\Gamma)}.

We say that {C} is finitary iff

\displaystyle  C(\Gamma)=\bigcup\left\{C(\Delta)\colon\Delta\subseteq\Gamma{\rm\ is\ finite}\right\}. (3)

Note that if {C} is finitary then the second condition in the definition of a closure operator can be removed, as it already follows from (3). Specifying a closure operator on a set is equivalent to specifying a consequence relation.

Theorem 4 Given a set {L}, there is a one-to-one correspondence between the following

  1. a set {\mathcal C\subseteq\mathcal{P}L} which is closed under intersections and such that {L\in\mathcal C}.
  2. a closure operator {C\colon\mathcal{P}L\rightarrow\mathcal{P}L}.
  3. a consequence relation {\vdash} on {\mathcal{P}L\times L}.

These are related by

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle C(\Gamma)&\displaystyle=\bigcap\left\{\Delta\in\mathcal C\colon\Delta\supseteq\Gamma\right\}\smallskip\\ &\displaystyle=\left\{a\in L\colon \Gamma\vdash a\right\},\smallskip\\ \displaystyle\mathcal C&\displaystyle=\left\{\Gamma\subseteq L\colon C(\Gamma)=\Gamma\right\}\smallskip\\ &\displaystyle=\left\{C(\Gamma)\colon\Gamma\subseteq L\right\}. \end{array} (4)

and {\vdash} is finitary iff {C} is finitary.

Proof: If {\vdash} is a consequence relation, define the map {C\colon\mathcal PL\rightarrow\mathcal PL} by the second equality of (4). We need to show that this satisfies each of the three properties of definition 3 of a closure operator. The first property, {\Gamma\subseteq C(\Gamma)}, follows immediately from the axiom of identity and the second from weakening. For the third, if {a\in C(C(\Gamma))} then, setting {\Delta=C(\Gamma)}, we have {\Gamma\vdash x} for all {x\in\Delta} and {\Delta\vdash a} so, by weakening and transitivity, {\Gamma\vdash a}, showing that {a\in C(\Gamma)} and, hence, {C(C(\Gamma))=C(\Gamma)}. Conversely, if {C} is a closure operator then define {\Gamma\vdash a} if {a\in C(\Gamma)}. The axiom of identity follows from {\Gamma\subseteq C(\Gamma)} and weakening follows from monotonicity, {C(\Gamma)\subseteq C(\Gamma\cup\Delta)}. For transitivity, suppose that {\Gamma\vdash x} for all {x\in\Delta} and {\Gamma\!,\Delta\vdash a}. Then, {\Delta\subseteq C(\Gamma)} and

\displaystyle  a\in C(\Gamma\cup\Delta)\subseteq C(C(\Gamma))=C(\Gamma)

and, hence, {\Gamma\vdash a}, showing that {\vdash} is a consequence relation.

For a closure operator {C}, define {\mathcal C} to be the set of {\Gamma\subseteq L} with {C(\Gamma)=\Gamma}, so that the third equality of (4) holds. By definition, every {\Gamma\in\mathcal C} is of the form {C(\Gamma)}. Also, as {C(C(\Gamma))=C(\Gamma)}, {\mathcal C} can also be expressed as the collection of sets {C(\Gamma)} for {\Gamma\subseteq L}, showing that the final equality of (4) also holds. We have {L=C(L)\in\mathcal C}. If {\Gamma_i\in\mathcal C} for {i\in I}, then setting {\Gamma=\bigcap_i\Gamma_i},

\displaystyle  \Gamma\subseteq C(\Gamma)\subseteq C(\Gamma_i)=\Gamma_i.

Taking the intersection over {i\in I} shows that {\Gamma=C(\Gamma)\in\mathcal C} and, hence, {\mathcal C} is closed under intersections. Conversely, suppose that {\mathcal C} is any collection of subsets of {L} and define {C\colon\mathcal PL\rightarrow\mathcal PL} by the first equality of (4). It is clear that this satisfies the first two conditions of definition 3. Next, for every {\Gamma\subseteq\Delta} with {\Delta\in\mathcal C} we have {C(\Gamma)\subseteq\Delta} by definition so,

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle C(C(\Gamma))&\displaystyle=\bigcap\left\{\Delta\in\mathcal C\colon\Delta\supseteq C(\Gamma)\right\}\smallskip\\ &\displaystyle=\bigcap\left\{\Delta\in\mathcal C\colon\Delta\supseteq\Gamma\right\}\smallskip\\ &\displaystyle=C(\Gamma) \end{array}

So, {C} is a closure operator. Note that we have not yet used the conditions that {L\in\mathcal C} and that {\mathcal C} is closed under intersections; these are only required to verify the final identity of (4). Every {\Gamma\in\mathcal C} is clearly the intersection of all {\Delta\in\mathcal C} containing {\Gamma}, so that {\Gamma=C(\Gamma)}. Conversely, it remains to show that {C(\Gamma)\in\mathcal C} for all {\Gamma\subseteq L}. As {L\in\mathcal C}, there exists at least one {\Delta\in\mathcal C} containing {\Gamma} and, as {\mathcal C} is closed under intersections, {C(\Gamma)} is then in {\mathcal C}.

Finally, suppose that {\vdash} is finitary. Then, for any {a\in C(\Gamma)}, we have {\Gamma\vdash a} and, hence, {\Delta\vdash a} for a finite {\Delta\subseteq\Gamma}. This shows that {a\in C(\Delta)} and, hence (3) holds and {C} is finitary. Conversely, if {C} is finitary and {\Gamma\vdash a} then {a\in C(\Gamma)} and, by (3), {a\in C(\Delta)} for a finite {\Delta\subseteq\Gamma}. So, {\Delta\vdash a}, showing that {\vdash} is finitary. ⬜
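For finite {L}, the correspondence of theorem 4 can be computed directly. The sketch below (function names are mine) starts from a family {\mathcal C} containing {L} and closed under intersections, builds the closure operator {C} via the first equality of (4), reads off the consequence relation, and recovers {\mathcal C} as the image of {C}:

```python
from itertools import chain, combinations
from functools import reduce

def closure_from_family(L, family):
    """The closure operator C(Gamma) = intersection of all members
    of the family containing Gamma (first equality of (4))."""
    def C(Gamma):
        return reduce(set.intersection,
                      (set(D) for D in family if set(Gamma) <= set(D)),
                      set(L))
    return C

def subsets(L):
    xs = list(L)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(xs, n) for n in range(len(xs) + 1))]

L = {'a', 'b', 'c'}
family = [{'a', 'b', 'c'}, {'a', 'b'}, {'a'}]  # contains L, intersection-closed
C = closure_from_family(L, family)

# the induced consequence relation, and the family recovered as fixed points
entails = lambda Gamma, a: a in C(Gamma)
recovered = {frozenset(C(G)) for G in subsets(L)}
```

Here {C(\emptyset)=\{a\}}, the smallest member of the family, and the recovered collection of closed sets coincides with the original family, as the theorem asserts.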

As noted above, any relation {R} on {\mathcal PL\times L} generates a consequence relation. Furthermore, if the premise sets of {R} are finite, then {\vdash} is also finitary, and an entailment {\Gamma\vdash a} can be established by a finite sequence of applications of {R}, as stated below. In particular, if {R} is recursively enumerable, then so are the consequences {\Gamma\vdash a}, and proofs of these entailments can be checked by following a recursively enumerable set of rules, so could potentially be verified by computer.

Lemma 5 Let {\vdash} be the consequence relation generated by a relation {R} on {\mathcal P_fL\times L}. Then {\Gamma\vdash a} iff {a\in\Gamma} or there exist {a_1,a_2,\ldots,a_n\in L} with

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle S_1Ra_1{\ \rm for\ some\ }S_1\subseteq\Gamma,\\ &\displaystyle S_2Ra_2{\rm\ for\ some\ } S_2\subseteq\Gamma\cup\{a_1\},\\ &\ \ \vdots\\ &\displaystyle S_kRa_k{\rm\ for\ some\ } S_k\subseteq\Gamma\cup\{a_1,\ldots,a_{k-1}\},\\ &\ \ \vdots\\ &\displaystyle S_nRa_n{\rm\ for\ some\ }S_n\subseteq\Gamma\cup\{a_1,a_2,\ldots,a_{n-1}\}. \end{array}

and {a_n=a}.

Furthermore, {\vdash} is finitary.

Proof: If {a\in\Gamma} then {\Gamma\vdash a} is just the axiom of identity. Otherwise, we show that {\Gamma\vdash a_n} by induction on n. First, if {n=1} then {S_1Ra_1} gives {S_1\vdash a_1} and, by weakening, {\Gamma\vdash a_1}. Next, for {n > 1}, we have {\Gamma\vdash a_k} for each {k < n} by the induction hypothesis and, hence, {\Gamma\vdash x} for all {x\in S_n}. As {S_nRa_n} gives {S_n\vdash a_n}, weakening and transitivity give {\Gamma\vdash a_n}.

Conversely, it needs to be shown that every {a\in L} with {\Gamma\vdash a} can be expressed as above. To see this, temporarily redefine {\Gamma\vdash a} to mean that {a} satisfies the conclusion of the lemma. As this relation contains {R}, and the relation generated by {R} is minimal, it is enough to show that it is a finitary consequence relation. Weakening is immediate and, as the construction above only refers to finitely many finite subsets of {\Gamma}, the relation is finitary. The axiom of identity is also immediate as, by definition, {\Gamma\vdash a} for all {a\in\Gamma}.

By theorem 2, it only remains to show that if {\Gamma\vdash b} and {\Gamma\!,b\vdash a} then {\Gamma\vdash a}. If {b\in\Gamma} or {a\in\Gamma\cup\{b\}} then the result is immediate. So, we only need to consider the case where we have finite sequences {a_k,b_k\in L}, finite sets {S_k\subseteq\Gamma\cup\{b,a_1,\ldots,a_{k-1}\}} and {T_k\subseteq\Gamma\cup\{b_1,\ldots,b_{k-1}\}} satisfying {S_kRa_k} and {T_kRb_k}, and {b_n=b}, {a_m=a}. In that case, set {\tilde a_k=b_k} and {\tilde S_k=T_k} for {k\le n}. Similarly, set {\tilde a_k=a_{k-n}} and {\tilde S_k=S_{k-n}} for {n < k\le m+n}. We then have {\tilde S_k\subseteq\Gamma\cup\{\tilde a_1,\ldots,\tilde a_{k-1}\}}, {\tilde S_k R\tilde a_k} and {a=\tilde a_{m+n}}, so {\Gamma\vdash a} as required. ⬜
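When {R} is given by a finite list of rules, the construction of lemma 5 amounts to forward chaining: starting from {\Gamma}, repeatedly apply rules whose premises are already established until nothing new appears. A sketch in Python (the rule representation is my own choice):

```python
def generated_consequences(Gamma, rules):
    """Deductive closure of Gamma under rules, each rule being a
    (premises, conclusion) pair representing S R a with S finite.
    Returns the set of all a with Gamma |- a, as in lemma 5."""
    known = set(Gamma)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # fire any rule whose premises are all established
            if set(premises) <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

rules = [({'a'}, 'b'), ({'b'}, 'c'), ({'c', 'd'}, 'e')]
```

For example, `generated_consequences({'a'}, rules)` yields `{'a', 'b', 'c'}`: the last rule never fires without `d`. Each pass through the loop adds at least one new conclusion, so the iteration terminates for finite rule sets.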

A consequence relation {\vdash} defines a preorder on {L}, given by {a\vdash b}. This can be expressed in any of the following equivalent ways.

Lemma 6 For any {a,b\in L}, the following are equivalent.

  1. {a\vdash b}.
  2. {\Gamma\vdash b} for all {\Gamma\subseteq L} satisfying {\Gamma\vdash a}.
  3. {\Gamma\!,a\vdash c} for all {\Gamma\subseteq L} and {c\in L} satisfying {\Gamma\!,b\vdash c}.

Proof: If the first statement holds and {\Gamma\vdash a} then, weakening {a\vdash b} to {\Gamma\!,a\vdash b}, transitivity gives {\Gamma\vdash b}.

Suppose that the second statement holds and {\Gamma\!,b\vdash c}. As {\Gamma\!,a\vdash a}, the second statement gives {\Gamma\!,a\vdash b}. Transitivity gives {\Gamma\!,a\vdash c}.

Finally, suppose that the third statement holds. Using {c=b}, we have {\Gamma\!,b\vdash b} and, hence, {\Gamma\!,a\vdash b}. Putting {\Gamma=\emptyset} gives the first statement. ⬜

Consequently, {\vdash} defines an equivalence relation on {L} given by {a\vdash b} and {b\vdash a}. We say that {a} and {b} are logically equivalent.

Corollary 7 For any {a,b\in L}, the following are equivalent.

  1. {a\vdash b} and {b\vdash a}.
  2. {\Gamma\vdash a} iff {\Gamma\vdash b}, for all {\Gamma\subseteq L}.
  3. {\Gamma\!,a\vdash c} iff {\Gamma\!,b\vdash c}, for all {\Gamma\subseteq L} and {c\in L}.

If any (or all) of these hold, then we say that {a,b} are logically equivalent and write {a\cong b}.

The following is just a statement of the fact that a consequence relation is not able to distinguish logically equivalent statements.

Lemma 8 Let {\Gamma,\Gamma^\prime\subseteq L} and {a,a^\prime\in L}. Suppose that {a\cong a^\prime} and for every {x\in\Gamma} there is an {x^\prime\in\Gamma^\prime} with {x\cong x^\prime}.

If {\Gamma\vdash a} then {\Gamma^\prime\vdash a^\prime}.

Proof: For every {x\in\Gamma}, we have {x\cong x^\prime} for some {x^\prime\in\Gamma^\prime}. As {\Gamma^\prime\vdash x^\prime} by the axiom of identity, this implies that {\Gamma^\prime\vdash x}. Also, weakening {\Gamma\vdash a} gives {\Gamma^\prime\!,\Gamma\vdash a} so, by transitivity, {\Gamma^\prime\vdash a} and, as {a\cong a^\prime}, {\Gamma^\prime\vdash a^\prime}. ⬜

Note that for any logic {(L,\vdash)} we can define its set {L/\cong} of equivalence classes, together with a natural function {\eta_L\colon L\rightarrow L/\cong} taking each {x\in L} to its equivalence class. We can then define a consequence relation on {L/\cong} by {\eta_L(\Gamma)\vdash\eta_L(a)} iff {\Gamma\vdash a}, for {\Gamma\subseteq L} and {a\in L}; by lemma 8, this is well-defined.
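As an illustration of the quotient construction (all names here are mine), we can compute the {\cong}-classes of a small semantic consequence relation; statements true in exactly the same models collapse to a single equivalence class:

```python
def equivalence_classes(L, entails):
    """Partition L into classes under a |- b and b |- a (corollary 7)."""
    equiv = lambda a, b: entails({a}, b) and entails({b}, a)
    classes = []
    for a in L:
        # compare against one representative of each existing class
        for cls in classes:
            if equiv(a, next(iter(cls))):
                cls.add(a)
                break
        else:
            classes.append({a})
    return classes

# p and q are true in exactly the same models, so p = q here
models = [{'p', 'q'}, {'p', 'q', 'r'}, set()]
entails = lambda Gamma, a: all(a in m for m in models if set(Gamma) <= m)
```

Since {\vdash} restricted to single statements is a preorder, comparing against a single representative of each class suffices, and the resulting partition does not depend on the iteration order.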
