# Logical Consequence

The aim of these notes is to achieve a basic understanding of the concepts of mathematical logic. The process of logical deduction is clearly a central theme of mathematics, where the idea is to prove a stated result by means of an argument which is broken down into small steps, each of which should be obviously valid to the intended audience. This is typically done in a slightly informal fashion, where the validity of each stage of the proof is supposed to be clear, but does not necessarily follow a fixed and clearly stated framework. The study of logic is meant to clarify how this process works, and give a clearly defined framework for logical reasoning. While the history of logic goes back thousands of years, a solid mathematical foundation was only developed in the late nineteenth and early twentieth centuries. For example, the Principia Mathematica published in 1910 by Whitehead and Russell was an attempt to show that all of mathematics can be derived from some precisely stated set of axioms together with inference rules for obtaining conclusions from lists of premises. From this standpoint, logic becomes another field of mathematical study. Just as rings, fields, measure spaces, etc., are objects of mathematical study, so is logic. As in any of these other areas, we use standard mathematical reasoning to prove results about logical systems. However, these results can then shed light on the process of mathematical reasoning itself, as well as providing justification for the underlying frameworks used for much of mathematics.

This has been a very successful endeavor, with theories such as Zermelo-Fraenkel set theory (ZF) providing the standard set of axioms used by mathematicians in many different fields of study. Many important ideas and results have arisen from the study of logic, such as the relative consistency of different theories and independence of certain statements. The axiom of choice, for example, is often considered to be intuitively obvious but, at other times, has been considered controversial. Thanks to the mathematical study of logic, it is now known to be independent of the other axioms of ZFC. Similarly, the continuum hypothesis, which Georg Cantor spent many years trying to prove, has also been shown to be independent of ZFC. It is also known that, although the axiom of choice can be used to construct sets which are not Lebesgue measurable, it is consistent with the axiom of dependent choice that all sets of real numbers are measurable. In the other direction, reverse mathematics has been successful in determining precisely which axioms are really required for various mathematical theorems. At a higher level, results such as the completeness theorem have put the theory on a solid footing, establishing the equivalence of semantic truth and syntactic provability in first-order logic. The incompleteness theorem, on the other hand, shows that no recursively enumerable proof system can prove all true statements about the arithmetic of the natural numbers and, furthermore, no sufficiently strong proof system is able to prove its own consistency (unless it is actually inconsistent). For example, it is not possible to prove the consistency of ZFC just by using the axioms and rules of ZFC itself, although it is possible in the presence of additional large cardinal axioms. Similarly, Peano arithmetic is not able to prove its own consistency, but it is possible if the well-ordering property of the ordinal ${\epsilon_0}$ is added.

The mathematical study of logic has also made clear the distinction between classical and intuitionistic or constructive logics, where the law of the excluded middle does not hold. Another consequence of putting logic on a solid mathematical foundation is that it should be possible to check the validity of mathematical proofs in an entirely systematic way, even, in theory, by computer. There are various proof-checkers available, although they are currently much more difficult and tedious to use than writing out proofs for human readers, and so tend not to be used for most mathematics. There is also a large intersection between mathematical logic and computer science. For example, the Curry-Howard correspondence gives a one-to-one correspondence between statements of intuitionistic implicational logic and the types of valid programs in simply typed lambda calculus, with the programs or lambda expressions playing the part of proofs. This correspondence has been used as the basis for various computer implementations of proof systems.

The starting point for most logical theories is a language in which statements can be formed according to rules for what constitutes a valid statement (wffs, or well-formed formulas). This generally includes certain special logical connectives which allow statements to be formed which express a logical connection between their component parts. For example, if 'P' and 'Q' are valid sentences, then '${P\rightarrow Q}$' is also valid (meaning 'if P then Q'). These logical connectives come with prescribed rules of inference which, together with a fixed list of axioms, allow us to prove theorems. This does, however, raise various questions. What is the 'correct' set of connectives, and what is the correct set of rules of inference that they should follow? It is possible that, with different connectives or rules of inference, an entirely separate set of theorems would result. In this post, I take a step back from such specific theories or rules. The idea is to first look at the most general concept of what a logic is, and only then determine exactly what connectives or rules of inference are possible or desirable.

#### Consequence Relations

Possibly the most basic concept in logic is that of entailment or inference. We start with a collection of premises, which are statements in some language, and, according to some rules, we establish a result. The starting point is a set ${L}$, which can be thought of as a set of well-formed statements in some language, although, to be as general as possible, we just assume that ${L}$ is a set with no specific restriction or interpretation assumed of its elements. We write

 $\displaystyle a_1,a_2,a_3,\ldots,a_n\vdash b,$ (1)

for ${a_i,b\in L}$, to mean that ${a_1,\ldots,a_n}$ entails ${b}$ or, equivalently, that ${b}$ is a logical consequence of ${a_1,a_2,\ldots,a_n}$. Quite what the relation ${\vdash}$ really means is left open at this stage. For example, considering a collection of interpretations or models, each of which assigns truth values to the elements of ${L}$, (1) can be taken to mean that any model assigning the truth value 1 to the ${a_i}$ also assigns the value 1 to ${b}$. Alternatively, (1) could mean that, with respect to some formal proof system, there exists a proof of ${b}$ from the premises ${a_i}$. More generally,

 $\displaystyle \Gamma\vdash a$ (2)

means that ${a\in L}$ is a logical consequence of the set of premises ${\Gamma\subseteq L}$. For convenience, we often put a list of subsets or elements of ${L}$ as the list of premises, so that

 $\displaystyle \Gamma_1,\Gamma_2,\ldots,\Gamma_m,a_1,a_2,\ldots,a_n\vdash b,$

for ${\Gamma_i\subseteq L}$ and ${a_i,b\in L}$, is just another way of writing

 $\displaystyle \Gamma_1\cup\Gamma_2\cup\cdots\cup\Gamma_m\cup\{a_1,a_2,\ldots,a_n\}\vdash b.$

Similarly, the statement ${\vdash a}$ with no premises is just another way of writing ${\emptyset\vdash a}$.
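The model-based reading of ${\vdash}$ described above can be made concrete with a short computation. The following Python sketch (the three-element language and its collection of models are invented purely for illustration) checks entailment by testing whether every model satisfying all of the premises also satisfies the conclusion.

```python
from itertools import product

# Model-based entailment: a model assigns a truth value (0 or 1) to
# every element of L, and the premises entail b when every model
# making all of the premises true also makes b true.
def entails(models, premises, b):
    return all(m[b] for m in models if all(m[a] for a in premises))

# A toy language L = {"p", "q", "r"} with all eight truth assignments.
L = ["p", "q", "r"]
models = [dict(zip(L, vals)) for vals in product([0, 1], repeat=3)]

# Over all models, "q" alone does not entail "p"...
print(entails(models, ["q"], "p"))   # False
# ...but over the models where p and q agree, it does.
agree = [m for m in models if m["p"] == m["q"]]
print(entails(agree, ["q"], "p"))    # True
```

Restricting the collection of models is what gives the relation its content: the fewer models there are, the more entailments hold.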

As a matter of terminology, expression (2) is a sequent, and the ${\vdash}$ symbol is referred to as a turnstile. The left hand side, ${\Gamma}$, is the set of premises or the antecedent, ${a}$ is the consequent, and we say that ${\Gamma}$ entails ${a}$ or that ${a}$ is a consequence of ${\Gamma}$. Although we are being rather general here and not considering any particular interpretation of the set ${L}$ or of the relation ${\vdash}$, there is a short list of 'obvious' properties which should hold for logical inference.

Definition 1 A consequence relation ${\vdash}$ on ${L}$ is a relation between subsets of ${L}$ and elements of ${L}$ (i.e., a subset of ${\mathcal{P}L\times L}$) satisfying

1. ${\Gamma\!,a\vdash a}$ (reflexivity/axiom of identity),
2. if ${\Gamma\vdash a}$ then ${\Gamma\!,\Delta\vdash a}$ (weakening),
3. if ${\Gamma\vdash x}$ for all ${x\in\Delta}$ and ${\Gamma\!,\Delta\vdash a}$ then ${\Gamma\vdash a}$ (transitivity/rule of cut),

for all ${\Gamma,\Delta\subseteq L}$ and ${a\in L}$. We will say that ${\vdash}$ is finitary if, whenever ${\Gamma\vdash a}$, there is a finite ${\Delta\subseteq\Gamma}$ with ${\Delta\vdash a}$.

A pair ${(L,\vdash)}$ consisting of a set ${L}$ together with a consequence relation ${\vdash}$ on ${L}$ will be known as a logic. We will often just use ${L}$ to denote the logic ${(L,\vdash)}$.
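For a finite language, the three conditions of Definition 1 can be verified exhaustively for the model-based entailment relation. The following Python sketch (again with an invented two-element language) fixes a collection of models and checks reflexivity, weakening and cut over all choices of ${\Gamma}$, ${\Delta}$ and ${a}$.

```python
from itertools import product, combinations

# Exhaustively verify the axioms of a consequence relation for
# model-based entailment over a small language L, using all four
# truth assignments as the collection of models.
L = ["p", "q"]
models = [dict(zip(L, vals)) for vals in product([0, 1], repeat=len(L))]

def entails(gamma, a):
    return all(m[a] for m in models if all(m[x] for x in gamma))

def subsets(s):
    s = list(s)
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

# 1. Reflexivity: Gamma, a |- a.
assert all(entails(g | {a}, a)
           for g in subsets(L) for a in L)
# 2. Weakening: if Gamma |- a then Gamma, Delta |- a.
assert all(entails(g | d, a)
           for g in subsets(L) for d in subsets(L) for a in L
           if entails(g, a))
# 3. Cut: if Gamma |- x for all x in Delta and Gamma, Delta |- a,
#    then Gamma |- a.
assert all(entails(g, a)
           for g in subsets(L) for d in subsets(L) for a in L
           if all(entails(g, x) for x in d) and entails(g | d, a))
print("reflexivity, weakening and cut all hold")
```

That these checks pass is no accident: any relation defined by "every model of the premises is a model of the conclusion" satisfies the three axioms, whatever the class of models.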