By now, you have seen how to define some elementary notions in dependent type theory. You have also seen that it is possible to import objects that are defined in Lean’s library. In this chapter, we will explain how mathematical propositions and proofs are expressed in the language of dependent type theory, so that you can start proving assertions about the objects and notations that have been defined. The encoding we use here is specific to the standard library; we will discuss proofs in homotopy type theory in a later chapter.
One strategy for proving assertions about objects defined in the language of dependent type theory is to layer an assertion language and a proof language on top of the definition language. But there is no reason to multiply languages in this way: dependent type theory is flexible and expressive, and there is no reason we cannot represent assertions and proofs in the same general framework.
For example, we could introduce a new type, Prop, to represent propositions, and constructors to build new propositions from others.
namespace hide
-- BEGIN
constant and : Prop → Prop → Prop
constant or : Prop → Prop → Prop
constant not : Prop → Prop
constant implies : Prop → Prop → Prop
variables p q r : Prop
check and p q -- Prop
check or (and p q) r -- Prop
check implies (and p q) (and q p) -- Prop
-- END
end hide
We could then introduce, for each element p : Prop, another type Proof p, for the type of proofs of p. An “axiom” would be a constant of such a type.
namespace hide
constant and : Prop → Prop → Prop
constant or : Prop → Prop → Prop
constant not : Prop → Prop
constant implies : Prop → Prop → Prop
-- BEGIN
constant Proof : Prop → Type
constant and_comm : Π p q : Prop, Proof (implies (and p q) (and q p))
variables p q : Prop
check and_comm p q -- Proof (implies (and p q) (and q p))
-- END
end hide
In addition to axioms, however, we would also need rules to build new proofs from old ones. For example, in many proof systems for propositional logic, we have the rule of modus ponens:
From a proof of implies p q and a proof of p, we obtain a proof of q.
We could represent this as follows:
namespace hide
constant implies : Prop → Prop → Prop
constant Proof : Prop → Type
-- BEGIN
constant modus_ponens (p q : Prop) : Proof (implies p q) → Proof p → Proof q
-- END
end hide
Systems of natural deduction for propositional logic also typically rely on the following rule:
Suppose that, assuming p as a hypothesis, we have a proof of q. Then we can “cancel” the hypothesis and obtain a proof of implies p q.
We could render this as follows:
namespace hide
constant implies : Prop → Prop → Prop
constant Proof : Prop → Type
-- BEGIN
constant implies_intro (p q : Prop) : (Proof p → Proof q) → Proof (implies p q)
-- END
end hide
This approach would provide us with a reasonable way of building assertions and proofs. Determining that an expression t is a correct proof of an assertion p would then simply be a matter of checking that t has type Proof p.
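For instance, under this encoding, a proof of implies p p is obtained by applying implies_intro to the identity function on proofs of p. (This snippet is our own illustration, built from the constants declared above.)

namespace hide
constant implies : Prop → Prop → Prop
constant Proof : Prop → Type
constant implies_intro (p q : Prop) : (Proof p → Proof q) → Proof (implies p q)
-- BEGIN
variable p : Prop
check implies_intro p p (λ H : Proof p, H) -- Proof (implies p p)
-- END
end hide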
Some simplifications are possible, however. To start with, we can avoid writing the term Proof repeatedly by conflating Proof p with p itself. In other words, whenever we have p : Prop, we can interpret p as a type, namely, the type of its proofs. We can then read t : p as the assertion that t is a proof of p.
Moreover, once we make this identification, the rules for implication show that we can pass back and forth between implies p q and p → q. In other words, implication between propositions p and q corresponds to having a function that takes any element of p to an element of q. As a result, the introduction of the connective implies is entirely redundant: we can use the usual function space constructor p → q from dependent type theory as our notion of implication.
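Under this identification, modus ponens becomes ordinary function application, and canceling a hypothesis becomes lambda abstraction. The following sketch makes this concrete:

variables p q : Prop
check λ (H1 : p → q) (H2 : p), H1 H2 -- (p → q) → p → q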
This is the approach followed in the Calculus of Inductive Constructions, and hence in Lean as well. The fact that the rules for implication in a proof system for natural deduction correspond exactly to the rules governing abstraction and application for functions is an instance of the Curry-Howard isomorphism, sometimes known as the propositions-as-types paradigm. In fact, the type Prop is syntactic sugar for Type.{0}, the very bottom of the type hierarchy described in the last chapter. Prop has some special features, but like the other type universes, it is closed under the arrow constructor: if we have p q : Prop, then p → q : Prop.
There are at least two ways of thinking about propositions as types. To some who take a constructive view of logic and mathematics, this is a faithful rendering of what it means to be a proposition: a proposition p represents a sort of data type, namely, a specification of the type of data that constitutes a proof. A proof of p is then simply an object t : p of the right type.
Those not inclined to this ideology can view it, rather, as a simple coding trick. To each proposition p we associate a type, which is empty if p is false and has a single element, say *, if p is true. In the latter case, let us say that (the type associated with) p is inhabited. It just so happens that the rules for function application and abstraction can conveniently help us keep track of which elements of Prop are inhabited. So constructing an element t : p tells us that p is indeed true. You can think of the inhabitant of p as being the “fact that p is true.” A proof of p → q uses “the fact that p is true” to obtain “the fact that q is true.”
Indeed, if p : Prop is any proposition, Lean’s standard kernel treats any two elements t1 t2 : p as being definitionally equal, much the same way as it treats (λ x, t) s and t[s/x] as definitionally equal. This is known as “proof irrelevance,” and is consistent with the interpretation in the last paragraph. It means that even though we can treat proofs t : p as ordinary objects in the language of dependent type theory, they carry no information beyond the fact that p is true.
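The following sketch illustrates this (the constants are our own, for illustration): since H1 and H2 are definitionally equal, rfl proves that they are equal.

constant p : Prop
constants H1 H2 : p
check (rfl : H1 = H2) -- accepted, by proof irrelevance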
The two ways we have suggested thinking about the propositions-as-types paradigm differ in a fundamental way. From the constructive point of view, proofs are abstract mathematical objects that are denoted by suitable expressions in dependent type theory. In contrast, if we think in terms of the coding trick described above, then the expressions themselves do not denote anything interesting. Rather, it is the fact that we can write them down and check that they are well-typed that ensures that the proposition in question is true. In other words, the expressions themselves are the proofs.
In the exposition below, we will slip back and forth between these two ways of talking, at times saying that an expression “constructs” or “produces” or “returns” a proof of a proposition, and at other times simply saying that it “is” such a proof. This is similar to the way that computer scientists occasionally blur the distinction between syntax and semantics by saying, at times, that a program “computes” a certain function, and at other times speaking as though the program “is” the function in question.
In any case, all that matters in the end is that the bottom line is clear. To formally express a mathematical assertion in the language of dependent type theory, we need to exhibit a term p : Prop. To prove that assertion, we need to exhibit a term t : p. Lean’s task, as a proof assistant, is to help us to construct such a term, t, and to verify that it is well-formed and has the correct type.
Lean also supports an alternative proof-relevant kernel, which forms the basis for homotopy type theory. We will return to this topic in a later chapter.
In the propositions-as-types paradigm, theorems involving only → can be proved using lambda abstraction and application. In Lean, the theorem command introduces a new theorem:
constants p q : Prop
theorem t1 : p → q → p := λ Hp : p, λ Hq : q, Hp
This looks exactly like the definition of the constant function in the last chapter, the only difference being that the arguments are elements of Prop rather than Type. Intuitively, our proof of p → q → p assumes p and q are true, and uses the first hypothesis (trivially) to establish that the conclusion, p, is true.
Note that the theorem command is really a version of the definition command: under the propositions-as-types correspondence, proving the theorem p → q → p is really the same as defining an element of the associated type. To the kernel type checker, there is no difference between the two.
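For example, the same term can be given with the definition command instead (a sketch):

constants p q : Prop
definition t3 : p → q → p := λ (Hp : p) (Hq : q), Hp
check t3 -- p → q → p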
There are a few pragmatic differences between definitions and theorems, however, that you will learn more about in Chapter Building Theories and Proofs. In normal circumstances, it is never necessary to unfold the “definition” of a theorem; by proof irrelevance, any two proofs of that theorem are definitionally equal. Once the proof of a theorem is complete, typically we only need to know that the proof exists; it doesn’t matter what the proof is. In light of that fact, Lean tags proofs as irreducible, which serves as a hint to the parser (more precisely, the elaborator) that there is generally no need to unfold it when processing a file. Moreover, for efficiency purposes, Lean treats theorems as axiomatic constants within the file in which they are defined. This makes it possible to process and check theorems in parallel, since theorems later in a file do not make use of the contents of earlier proofs.
As with definitions, the print command will show you the proof of a theorem, with a slight twist: if you want to print a theorem in the same file in which it is defined, you need to use the reveal command to force Lean to use the theorem itself, rather than its axiomatic surrogate.
constants p q : Prop
-- BEGIN
theorem t1 : p → q → p := λ Hp : p, λ Hq : q, Hp
reveal t1
print t1
-- END
(To save space, the online version of Lean does not store proofs of theorems in the library, so you cannot print them in the browser interface.)
Notice that the lambda abstractions Hp : p and Hq : q can be viewed as temporary assumptions in the proof of t1. Lean provides the alternative syntax assume for such a lambda abstraction:
constants p q : Prop
-- BEGIN
theorem t1 : p → q → p :=
assume Hp : p,
assume Hq : q,
Hp
-- END
Lean also allows us to specify the type of the final term, Hp, explicitly, with a show statement.
constants p q : Prop
-- BEGIN
theorem t1 : p → q → p :=
assume Hp : p,
assume Hq : q,
show p, from Hp
-- END
Adding such extra information can improve the clarity of a proof and help detect errors when writing it. The show command does nothing more than annotate the type, and, internally, all the presentations of t1 that we have seen produce the same term. Lean also allows you to use the alternative syntax lemma and corollary instead of theorem:
constants p q : Prop
-- BEGIN
lemma t1 : p → q → p :=
assume Hp : p,
assume Hq : q,
show p, from Hp
-- END
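The corollary keyword works in exactly the same way:

constants p q : Prop
-- BEGIN
corollary t1 : p → q → p :=
assume Hp : p,
assume Hq : q,
show p, from Hp
-- END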
As with ordinary definitions, one can move the lambda-abstracted variables to the left of the colon:
constants p q : Prop
-- BEGIN
theorem t1 (Hp : p) (Hq : q) : p := Hp
check t1 -- p → q → p
-- END
Now we can apply the theorem t1 just as we would apply a function.
constants p q : Prop
theorem t1 (Hp : p) (Hq : q) : p := Hp
-- BEGIN
axiom Hp : p
theorem t2 : q → p := t1 Hp
-- END
Here, the axiom command is alternative syntax for constant. Declaring a “constant” Hp : p is tantamount to declaring that p is true, as witnessed by Hp. Applying the theorem t1 : p → q → p to the fact Hp : p that p is true yields the theorem t2 : q → p.
Notice, by the way, that the original theorem t1 is true for any propositions p and q, not just the particular constants declared. So it would be more natural to define the theorem so that it quantifies over those, too:
theorem t1 (p q : Prop) (Hp : p) (Hq : q) : p := Hp
check t1
The type of t1 is now ∀ p q : Prop, p → q → p. We can read this as the assertion “for every pair of propositions p q, we have p → q → p”. The symbol ∀ is alternate syntax for Π, and later we will see how Pi types let us model universal quantifiers more generally. For the moment, however, we will focus on theorems in propositional logic, generalized over the propositions. We will tend to work in sections with variables over the propositions, so that they are generalized for us automatically.
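For example, here is a sketch of that mechanism at work: inside the section, t1 mentions only the hypotheses, but when the section is closed, the variables it uses are generalized automatically.

section
variables p q : Prop
theorem t1 (Hp : p) (Hq : q) : p := Hp
end
check t1 -- ∀ (p q : Prop), p → q → p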
When we generalize t1 in that way, we can then apply it to different pairs of propositions, to obtain different instances of the general theorem.
theorem t1 (p q : Prop) (Hp : p) (Hq : q) : p := Hp
variables p q r s : Prop
check t1 p q -- p → q → p
check t1 r s -- r → s → r
check t1 (r → s) (s → r) -- (r → s) → (s → r) → r → s
variable H : r → s
check t1 (r → s) (s → r) H -- (s → r) → r → s
Remember that under the propositions-as-types correspondence, a variable H of type r → s can be viewed as the hypothesis, or premise, that r → s holds. For that reason, Lean offers the alternative syntax, premise, for variable.
theorem t1 (p q : Prop) (Hp : p) (Hq : q) : p := Hp
variables p q r s : Prop
-- BEGIN
premise H : r → s
check t1 (r → s) (s → r) H
-- END
As another example, let us consider the composition function discussed in the last chapter, now with propositions instead of types.
variables p q r s : Prop
theorem t2 (H1 : q → r) (H2 : p → q) : p → r :=
assume H3 : p,
show r, from H1 (H2 H3)
As a theorem of propositional logic, what does t2 say?
Lean allows the alternative syntax premise and premises for variable and variables. This makes sense, of course, for variables whose type is an element of Prop. The following definition of t2 has the same net effect as the preceding one.
variables p q r s : Prop
premises (H1 : q → r) (H2 : p → q)
theorem t2 : p → r :=
assume H3 : p,
show r, from H1 (H2 H3)
Lean defines all the standard logical connectives and notation. The propositional connectives come with the following notation:
| Ascii | Unicode | Emacs shortcut for unicode | Definition |
|-------|---------|----------------------------|------------|
| true  |         |                            | true       |
| false |         |                            | false      |
| not   | ¬       | \not, \neg                 | not        |
| /\    | ∧       | \and                       | and        |
| \/    | ∨       | \or                        | or         |
| ->    | →       | \to, \r, \implies          |            |
| <->   | ↔       | \iff, \lr                  | iff        |
They all take values in Prop.
variables p q : Prop
check p → q → p ∧ q
check ¬p → p ↔ false
check p ∨ q → q ∨ p
The order of operations is fairly standard: unary negation ¬ binds most strongly, then ∧ and ∨, and finally → and ↔. For example, a ∧ b → c ∨ d ∧ e means (a ∧ b) → (c ∨ (d ∧ e)). Remember that → associates to the right (nothing changes now that the arguments are elements of Prop, instead of some other Type), as do the other binary connectives. So if we have p q r : Prop, the expression p → q → r reads “if p, then if q, then r.” This is just the “curried” form of p ∧ q → r.
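The following checks illustrate these conventions; the comments record how each expression is parsed.

variables a b c d e : Prop
check a ∧ b → c ∨ d ∧ e -- parsed as (a ∧ b) → (c ∨ (d ∧ e))
check a → b → c -- parsed as a → (b → c)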
In the last chapter we observed that lambda abstraction can be viewed as an “introduction rule” for →. In the current setting, it shows how to “introduce” or establish an implication. Application can be viewed as an “elimination rule,” showing how to “eliminate” or use an implication in a proof. The other propositional connectives are defined in the standard library in the file init.datatypes, and each comes with its canonical introduction and elimination rules.
The expression and.intro H1 H2 creates a proof of p ∧ q from proofs H1 : p and H2 : q. It is common to describe and.intro as the and-introduction rule. In the next example we use and.intro to create a proof of p → q → p ∧ q.
variables p q : Prop
-- BEGIN
example (Hp : p) (Hq : q) : p ∧ q := and.intro Hp Hq
check assume (Hp : p) (Hq : q), and.intro Hp Hq
-- END
The example command states a theorem without naming it or storing it in the permanent context. Essentially, it just checks that the given term has the indicated type. It is convenient for illustration, and we will use it often.
The expression and.elim_left H creates a proof of p from a proof H : p ∧ q. Similarly, and.elim_right H is a proof of q. They are commonly known as the left and right and-elimination rules.
variables p q : Prop
-- BEGIN
example (H : p ∧ q) : p := and.elim_left H
example (H : p ∧ q) : q := and.elim_right H
-- END
Because they are so commonly used, the standard library provides the abbreviations and.left and and.right for and.elim_left and and.elim_right, respectively.
We can now prove p ∧ q → q ∧ p with the following proof term.
variables p q : Prop
-- BEGIN
example (H : p ∧ q) : q ∧ p :=
and.intro (and.right H) (and.left H)
-- END
Notice that and-introduction and and-elimination are similar to the pairing and projection operations for the cartesian product. The difference is that given Hp : p and Hq : q, and.intro Hp Hq has type p ∧ q : Prop, while pair Hp Hq has type p × q : Type. The similarity between ∧ and × is another instance of the Curry-Howard isomorphism, but in contrast to implication and the function space constructor, ∧ and × are treated separately in Lean. Under the analogy, however, the proof we have just constructed is similar to a function that swaps the elements of a pair.
The expression or.intro_left q Hp creates a proof of p ∨ q from a proof Hp : p. Similarly, or.intro_right p Hq creates a proof of p ∨ q from a proof Hq : q. These are the left and right or-introduction rules.
variables p q : Prop
-- BEGIN
example (Hp : p) : p ∨ q := or.intro_left q Hp
example (Hq : q) : p ∨ q := or.intro_right p Hq
-- END
The or-elimination rule is slightly more complicated. The idea is that we can prove r from p ∨ q by showing that r follows from p and that r follows from q. In other words, it is a proof “by cases.” In the expression or.elim Hpq Hpr Hqr, or.elim takes three arguments, Hpq : p ∨ q, Hpr : p → r, and Hqr : q → r, and produces a proof of r. In the following example, we use or.elim to prove p ∨ q → q ∨ p.
variables p q r : Prop
-- BEGIN
example (H : p ∨ q) : q ∨ p :=
or.elim H
(assume Hp : p,
show q ∨ p, from or.intro_right q Hp)
(assume Hq : q,
show q ∨ p, from or.intro_left p Hq)
-- END
In most cases, the first argument of or.intro_right and or.intro_left can be inferred automatically by Lean. Lean therefore provides or.inr and or.inl as shorthands for or.intro_right _ and or.intro_left _. Thus the proof term above could be written more concisely:
variables p q r : Prop
-- BEGIN
example (H : p ∨ q) : q ∨ p := or.elim H (λ Hp, or.inr Hp) (λ Hq, or.inl Hq)
-- END
Notice that there is enough information in the full expression for Lean to infer the types of Hp and Hq as well. But using the type annotations in the longer version makes the proof more readable, and can help catch and debug errors.
The expression not.intro H produces a proof of ¬p from H : p → false. That is, we obtain ¬p if we can derive a contradiction from p. The expression not.elim Hnp Hp produces a proof of false from Hp : p and Hnp : ¬p. The next example uses these rules to produce a proof of (p → q) → ¬q → ¬p.
variables p q : Prop
-- BEGIN
example (Hpq : p → q) (Hnq : ¬q) : ¬p :=
not.intro
(assume Hp : p,
show false, from not.elim Hnq (Hpq Hp))
-- END
In the standard library, ¬p is actually an abbreviation for p → false, that is, the fact that p implies a contradiction. You can check that not.intro then amounts to the introduction rule for implication. Similarly, the rule not.elim, that is, the principle ¬p → p → false, corresponds to function application. In other words, ¬p → p → false is derived by applying the first argument to the second, with the term assume Hnp, assume Hp, Hnp Hp. We can thus avoid the use of not.intro and not.elim entirely, in favor of abstraction and application:
variables p q : Prop
-- BEGIN
example (Hpq : p → q) (Hnq : ¬q) : ¬p :=
assume Hp : p, Hnq (Hpq Hp)
-- END
The connective false has a single elimination rule, false.elim, which expresses the fact that anything follows from a contradiction. This rule is sometimes called ex falso (short for ex falso sequitur quodlibet), or the principle of explosion.
variables p q : Prop
-- BEGIN
example (Hp : p) (Hnp : ¬p) : q := false.elim (Hnp Hp)
-- END
The arbitrary fact, q, that follows from falsity is an implicit argument in false.elim and is inferred automatically. This pattern, deriving an arbitrary fact from contradictory hypotheses, is quite common, and is represented by absurd.
variables p q : Prop
-- BEGIN
example (Hp : p) (Hnp : ¬p) : q := absurd Hp Hnp
-- END
Here, for example, is a proof of ¬p → q → (q → p) → r:
variables p q r : Prop
-- BEGIN
example (Hnp : ¬p) (Hq : q) (Hqp : q → p) : r :=
absurd (Hqp Hq) Hnp
-- END
Incidentally, just as false has only an elimination rule, true has only an introduction rule, true.intro : true, sometimes abbreviated trivial : true. In other words, true is simply true, and has a canonical proof, trivial.
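For example:

example : true := trivial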
The expression iff.intro H1 H2 produces a proof of p ↔ q from H1 : p → q and H2 : q → p. The expression iff.elim_left H produces a proof of p → q from H : p ↔ q. Similarly, iff.elim_right H produces a proof of q → p from H : p ↔ q. Here is a proof of p ∧ q ↔ q ∧ p:
variables p q : Prop
-- BEGIN
theorem and_swap : p ∧ q ↔ q ∧ p :=
iff.intro
(assume H : p ∧ q,
show q ∧ p, from and.intro (and.right H) (and.left H))
(assume H : q ∧ p,
show p ∧ q, from and.intro (and.right H) (and.left H))
check and_swap p q -- p ∧ q ↔ q ∧ p
-- END
Because they represent a form of modus ponens, iff.elim_left and iff.elim_right can be abbreviated iff.mp and iff.mpr, respectively. In the next example, we use the theorem and_swap to derive q ∧ p from p ∧ q:
variables p q : Prop
theorem and_swap : p ∧ q ↔ q ∧ p :=
iff.intro
(assume H : p ∧ q,
show q ∧ p, from and.intro (and.right H) (and.left H))
(assume H : q ∧ p,
show p ∧ q, from and.intro (and.right H) (and.left H))
-- BEGIN
premise H : p ∧ q
example : q ∧ p := iff.mp (and_swap p q) H
-- END
This is a good place to introduce another device Lean offers to help structure long proofs, namely, the have construct, which introduces an auxiliary subgoal in a proof. Here is a small example, adapted from the last section:
-- BEGIN
section
variables p q : Prop
example (H : p ∧ q) : q ∧ p :=
have Hp : p, from and.left H,
have Hq : q, from and.right H,
show q ∧ p, from and.intro Hq Hp
end
-- END
Internally, the expression have H : p, from s, t produces the term (λ (H : p), t) s. In other words, s is a proof of p, t is a proof of the desired conclusion assuming H : p, and the two are combined by a lambda abstraction and application. This simple device is extremely useful when it comes to structuring long proofs, since we can use intermediate have’s as stepping stones leading to the final goal.
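To illustrate, here is the example from above written directly as the term that the have expressions produce (a sketch; Lean elaborates both versions to the same term):

section
variables p q : Prop
example (H : p ∧ q) : q ∧ p :=
(λ Hp : p, (λ Hq : q, and.intro Hq Hp) (and.right H)) (and.left H)
end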
The introduction and elimination rules we have seen so far are all constructive, which is to say, they reflect a computational understanding of the logical connectives based on the propositions-as-types correspondence. Ordinary classical logic adds to this the law of the excluded middle, p ∨ ¬p. To use this principle, you have to load the appropriate classical axioms.
import logic.choice
variable p : Prop
check em p
Alternatively, you can simply write import classical to import the classical version of the standard library.
Intuitively, the constructive “or” is very strong: asserting p ∨ q amounts to knowing which is the case. If RH represents the Riemann hypothesis, a classical mathematician is willing to assert RH ∨ ¬RH, even though we cannot yet assert either disjunct.
One consequence of the law of the excluded middle is the principle of double-negation elimination:
import classical
-- BEGIN
theorem dne {p : Prop} (H : ¬¬p) : p :=
or.elim (em p)
(assume Hp : p, Hp)
(assume Hnp : ¬p, absurd Hnp H)
-- END
Double-negation elimination allows one to prove any proposition, p, by assuming ¬p and deriving false, because that amounts to proving ¬¬p. In other words, double-negation elimination allows one to carry out a proof by contradiction, something which is not generally possible in constructive logic. As an exercise, you might try proving the converse, that is, showing that em can be proved from dne.
Loading the classical axioms also gives you access to additional patterns of proof that can be justified by appeal to em. For example, one can carry out a proof by cases:
import classical
variable p : Prop
-- BEGIN
example (H : ¬¬p) : p :=
by_cases
(assume H1 : p, H1)
(assume H1 : ¬p, absurd H1 H)
-- END
Or you can carry out a proof by contradiction:
import classical
variable p : Prop
-- BEGIN
example (H : ¬¬p) : p :=
by_contradiction
(assume H1 : ¬p,
show false, from H H1)
-- END
If you are not used to thinking constructively, it may take some time for you to get a sense of where classical reasoning is used. It is needed in the following example because, from a constructive standpoint, knowing that p and q are not both true does not necessarily tell you which one is false:
import classical
variables p q : Prop
-- BEGIN
example (H : ¬ (p ∧ q)) : ¬ p ∨ ¬ q :=
or.elim (em p)
(assume Hp : p,
or.inr
(show ¬q, from
assume Hq : q,
H (and.intro Hp Hq)))
(assume Hp : ¬p,
or.inl Hp)
-- END
We will see later that there are situations in constructive logic where principles like excluded middle and double-negation elimination are permissible, and Lean supports the use of classical reasoning in such contexts. Importing logic.axioms.classical allows one to use such reasoning freely, without any extra justification.
There are additional classical axioms that are not included by default in the standard library. We will discuss these in detail in Chapter Axioms and Computation.
Lean’s standard library contains proofs of many valid statements of propositional logic, all of which you are free to use in proofs of your own. In this section, we will review some common identities, and encourage you to try proving them on your own using the rules above.
The following is a long list of assertions in propositional logic. Prove as many as you can, using the rules introduced above to replace the sorry placeholders by actual proofs. The ones that require classical reasoning are grouped together at the end, while the rest are constructively valid.
import classical
variables p q r s : Prop
-- commutativity of ∧ and ∨
example : p ∧ q ↔ q ∧ p := sorry
example : p ∨ q ↔ q ∨ p := sorry
-- associativity of ∧ and ∨
example : (p ∧ q) ∧ r ↔ p ∧ (q ∧ r) := sorry
example : (p ∨ q) ∨ r ↔ p ∨ (q ∨ r) := sorry
-- distributivity
example : p ∧ (q ∨ r) ↔ (p ∧ q) ∨ (p ∧ r) := sorry
example : p ∨ (q ∧ r) ↔ (p ∨ q) ∧ (p ∨ r) := sorry
-- other properties
example : (p → (q → r)) ↔ (p ∧ q → r) := sorry
example : ((p ∨ q) → r) ↔ (p → r) ∧ (q → r) := sorry
example : ¬(p ∨ q) ↔ ¬p ∧ ¬q := sorry
example : ¬p ∨ ¬q → ¬(p ∧ q) := sorry
example : ¬(p ∧ ¬ p) := sorry
example : p ∧ ¬q → ¬(p → q) := sorry
example : ¬p → (p → q) := sorry
example : (¬p ∨ q) → (p → q) := sorry
example : p ∨ false ↔ p := sorry
example : p ∧ false ↔ false := sorry
example : ¬(p ↔ ¬p) := sorry
example : (p → q) → (¬q → ¬p) := sorry
-- these require classical reasoning
example : (p → r ∨ s) → ((p → r) ∨ (p → s)) := sorry
example : ¬(p ∧ q) → ¬p ∨ ¬q := sorry
example : ¬(p → q) → p ∧ ¬q := sorry
example : (p → q) → (¬p ∨ q) := sorry
example : (¬q → ¬p) → (p → q) := sorry
example : p ∨ ¬p := sorry
example : (((p → q) → p) → p) := sorry
The sorry identifier magically produces a proof of anything, or provides an object of any data type at all. Of course, it is unsound as a proof method: you can use it to prove false, for example. Lean produces severe warnings when files use or import theorems that depend on it. But it is very useful for building long proofs incrementally. Start writing the proof from the top down, using sorry to fill in subproofs. Make sure Lean accepts the term with all the sorry’s; if not, there are errors that you need to correct. Then go back and replace each sorry with an actual proof, until no more remain.
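For example, we might begin a proof of the first identity on the list as follows, checking that Lean accepts the skeleton before filling in the two branches:

variables p q : Prop
example : p ∧ q ↔ q ∧ p :=
iff.intro
  (assume H : p ∧ q, sorry)
  (assume H : q ∧ p, sorry)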
Here is another useful trick. Instead of using sorry, you can use an underscore _ as a placeholder. Recall that this asks Lean to infer the argument and fill it in automatically. If Lean tries to do so and fails, it returns with an error message “don’t know how to synthesize placeholder.” This is followed by the type of the term it is expecting, and all the objects and hypotheses available in the context. In other words, for each unresolved placeholder, Lean reports the subgoal that needs to be filled at that point. You can then construct a proof by incrementally filling in these placeholders.
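For example, in the following sketch, Lean reports two unsynthesized placeholders, one of type q and one of type p, each with the hypothesis H : p ∧ q available in its context:

variables p q : Prop
example (H : p ∧ q) : q ∧ p := and.intro _ _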
For reference, here are two sample proofs of validities taken from the list above.
import classical
variables p q r : Prop
-- distributivity
example : p ∧ (q ∨ r) ↔ (p ∧ q) ∨ (p ∧ r) :=
iff.intro
(assume H : p ∧ (q ∨ r),
have Hp : p, from and.left H,
or.elim (and.right H)
(assume Hq : q,
show (p ∧ q) ∨ (p ∧ r), from or.inl (and.intro Hp Hq))
(assume Hr : r,
show (p ∧ q) ∨ (p ∧ r), from or.inr (and.intro Hp Hr)))
(assume H : (p ∧ q) ∨ (p ∧ r),
or.elim H
(assume Hpq : p ∧ q,
have Hp : p, from and.left Hpq,
have Hq : q, from and.right Hpq,
show p ∧ (q ∨ r), from and.intro Hp (or.inl Hq))
(assume Hpr : p ∧ r,
have Hp : p, from and.left Hpr,
have Hr : r, from and.right Hpr,
show p ∧ (q ∨ r), from and.intro Hp (or.inr Hr)))
-- an example that requires classical reasoning
example : ¬(p ∧ ¬q) → (p → q) :=
assume H : ¬(p ∧ ¬q),
assume Hp : p,
show q, from
or.elim (em q)
(assume Hq : q, Hq)
(assume Hnq : ¬q, absurd (and.intro Hp Hnq) H)