diff --git a/Appendix.qmd b/Appendix.qmd index 28f956e..413408f 100644 --- a/Appendix.qmd +++ b/Appendix.qmd @@ -196,7 +196,7 @@ There is a `show` tactic in standard Lean, but it works a little differently fro | `⋃₀` | `\U0` | | `⋂₀` | `\I0` | | `\` | `\\` | -| `△` | `\bigtriangleup` | +| `∆` | `\symmdiff` | | `∅` | `\emptyset` | | `𝒫` | `\powerset` | | `·` | `\.` | diff --git a/Chap3.qmd b/Chap3.qmd index 5c0bcad..9570b06 100644 --- a/Chap3.qmd +++ b/Chap3.qmd @@ -496,7 +496,7 @@ You may plug in any value of type `U`, say `a`, for `x` and use this given to co This strategy says that if you have `h : ∀ (x : U), P x` and `a : U`, then you can infer `P a`. Indeed, in this situation Lean will recognize `h a` as a proof of `P a`. For example, you can write `have h' : P a := h a` in a Lean tactic-mode proof, and Lean will add `h' : P a` to the tactic state. Note that `a` here need not be simply a variable; it can be any expression denoting an object of type `U`. -Let's try these strategies out in a Lean proof. In Lean, if you don't want to give a theorem a name, you can simply call it an `example` rather than a `theorem`, and then there is no need to give it a name. In the following theorem, you can enter the symbol `∀` by typing `\forall` or `\all`, and you can enter `∃` by typing `\exists` or `\ex`. +Let's try these strategies out in a Lean proof. In Lean, if you don't want to give a theorem a name, you can simply call it an `example` rather than a `theorem`, and then there is no need to give it a name. In the following example, you can enter the symbol `∀` by typing `\forall` or `\all`, and you can enter `∃` by typing `\exists` or `\ex`. ::: {.inout} ::: {.inpt} @@ -657,7 +657,7 @@ Our next example is a theorem of set theory. 
You already know how to type a few | `∪` | `\union` or `\cup` | | `∩` | `\inter` or `\cap` | | `\` | `\\` | -| `△` | `\bigtriangleup` | +| `∆` | `\symmdiff` | | `∅` | `\emptyset` | | `𝒫` | `\powerset` | ::: @@ -3135,7 +3135,7 @@ theorem Exercise_3_5_18 (U : Type) (F G H : Set (Set U)) ::: {.numex arguments="7"} ```lean theorem Exercise_3_5_24a (U : Type) (A B C : Set U) : - (A ∪ B) △ C ⊆ (A △ C) ∪ (B △ C) := sorry + (A ∪ B) ∆ C ⊆ (A ∆ C) ∪ (B ∆ C) := sorry ``` ::: @@ -4007,15 +4007,15 @@ theorem union_comm {U : Type} (X Y : Set U) : done ``` -It takes a few seconds for Lean to search its library of theorems, but eventually a blue squiggle appears under `apply?`, indicating that the tactic has produced an answer. You will find the answer in the Infoview pane: `Try this: exact or_comm`. The word `exact` is the name of a tactic that we have not discussed; it is a shorthand for `show _ from`, where the blank gets filled in with the goal. Thus, you can think of `apply?`'s answer as a shortened form of the tactic +It takes a few seconds for Lean to search its library of theorems, but eventually a blue squiggle appears under `apply?`, indicating that the tactic has produced an answer. You will find the answer in the Infoview pane: `Try this: exact Or.comm`. The word `exact` is the name of a tactic that we have not discussed; it is a shorthand for `show _ from`, where the blank gets filled in with the goal. Thus, you can think of `apply?`'s answer as a shortened form of the tactic ::: {.ind} ``` -show x ∈ X ∨ x ∈ Y ↔ x ∈ Y ∨ x ∈ X from or_comm +show x ∈ X ∨ x ∈ Y ↔ x ∈ Y ∨ x ∈ X from Or.comm ``` ::: -Of course, this is precisely the step we used earlier to complete the proof. +The command `#check @Or.comm` will tell you that `Or.comm` is just an alternative name for the theorem `or_comm`. So the step suggested by the `apply?` tactic is essentially the same as the step we used earlier to complete the proof. 
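As a minimal illustration of the point about `exact` and the two names (a sketch, assuming Mathlib is imported, since both `or_comm` and `Or.comm` come from there):

```lean
import Mathlib.Tactic

-- `exact` closes the goal with a term-mode proof; either name works,
-- since `Or.comm` is an alternative name for the theorem `or_comm`
example (P Q : Prop) : P ∨ Q ↔ Q ∨ P := by exact Or.comm
example (P Q : Prop) : P ∨ Q ↔ Q ∨ P := by exact or_comm
```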
Usually your proof will be more readable if you use the `show` tactic to state explicitly the goal that is being proven. This also gives Lean a chance to correct you if you have become confused about what goal you are proving. But sometimes---for example, if the goal is very long---it is convenient to use the `exact` tactic instead. You might think of `exact` as meaning "the following is a term-mode proof that is exactly what is needed to prove the goal." @@ -4039,6 +4039,7 @@ theorem Exercise_3_5_9 (U : Type) (A B : Set U) (h1 : 𝒫 (A ∪ B) = 𝒫 A ∪ 𝒫 B) : A ⊆ B ∨ B ⊆ A := by --Hint: Start like this: have h2 : A ∪ B ∈ 𝒫 (A ∪ B) := sorry + **done:: ``` ::: diff --git a/Chap6.qmd b/Chap6.qmd index d4feb37..6116123 100644 --- a/Chap6.qmd +++ b/Chap6.qmd @@ -890,7 +890,7 @@ How can we prove `2 ^ n > 0`? It is often helpful to think about whether there example (m n : Nat) (h : m > 0) : m ^ n > 0 := by ++apply?:: ``` -The `apply?` tactic comes up with `exact Nat.pos_pow_of_pos n h`, and `#check @pos_pow_of_pos` gives the result +The `apply?` tactic comes up with `exact Nat.pos_pow_of_pos n h`, and `#check @Nat.pos_pow_of_pos` gives the result ::: {.ind} ``` @@ -947,7 +947,7 @@ theorem Example_6_3_1 : ∀ n ≥ 4, fact n > 2 ^ n := by done ``` -The next example in *HTPI* is a proof of one of the laws of exponents: `a ^ (m + n) = a ^ m * a ^ n`. Lean's definition of exponentiation with natural number exponents is recursive. For some reason, the definitions are slightly different for different kinds of bases. The definitions Lean uses are essentially as follows: +The next example in *HTPI* is a proof of one of the laws of exponents: `a ^ (m + n) = a ^ m * a ^ n`. Lean's definition of exponentiation with natural number exponents is recursive. 
The definitions Lean uses are essentially as follows: ```lean --For natural numbers b and k, b ^ k = nat_pow b k: @@ -960,7 +960,7 @@ def nat_pow (b k : Nat) : Nat := def real_pow (b : Real) (k : Nat) : Real := match k with | 0 => 1 - | n + 1 => b * (real_pow b n) + | n + 1 => (real_pow b n) * b ``` Let's prove the addition law for exponents: @@ -994,10 +994,9 @@ theorem Example_6_3_2 : ∀ (a : Real) (m n : Nat), show a ^ (m + (n + 1)) = a ^ m * a ^ (n + 1) from calc a ^ (m + (n + 1)) _ = a ^ ((m + n) + 1) := by rw [add_assoc] - _ = a * a ^ (m + n) := by rfl - _ = a * (a ^ m * a ^ n) := by rw [ih] - _ = a ^ m * (a * a ^ n) := by - rw [←mul_assoc, mul_comm a, mul_assoc] + _ = a ^ (m + n) * a := by rfl + _ = (a ^ m * a ^ n) * a := by rw [ih] + _ = a ^ m * (a ^ n * a) := by rw [mul_assoc] _ = a ^ m * (a ^ (n + 1)) := by rfl done done @@ -1116,10 +1115,10 @@ theorem ??Example_6_3_4:: : ∀ (x : Real), x > -1 → rewrite [Nat.cast_succ] show (1 + x) ^ (n + 1) ≥ 1 + (n + 1) * x from calc (1 + x) ^ (n + 1) - _ = (1 + x) * (1 + x) ^ n := by rfl - _ ≥ (1 + x) * (1 + n * x) := sorry - _ = 1 + x + n * x + n * x ^ 2 := by ring - _ ≥ 1 + x + n * x + 0 := sorry + _ = (1 + x) ^ n * (1 + x) := by rfl + _ ≥ (1 + n * x) * (1 + x) := sorry + _ = 1 + n * x + x + n * x ^ 2 := by ring + _ ≥ 1 + n * x + x + 0 := sorry _ = 1 + (n + 1) * x := by ring done done @@ -1146,10 +1145,10 @@ theorem ??Example_6_3_4:: : ∀ (x : Real), x > -1 → have h2 : 0 ≤ 1 + x := by linarith show (1 + x) ^ (n + 1) ≥ 1 + (n + 1) * x from calc (1 + x) ^ (n + 1) - _ = (1 + x) * (1 + x) ^ n := by rfl - _ ≥ (1 + x) * (1 + n * x) := by rel [ih] - _ = 1 + x + n * x + n * x ^ 2 := by ring - _ ≥ 1 + x + n * x + 0 := sorry + _ = (1 + x) ^ n * (1 + x) := by rfl + _ ≥ (1 + n * x) * (1 + x) := by rel [ih] + _ = 1 + n * x + x + n * x ^ 2 := by ring + _ ≥ 1 + n * x + x + 0 := sorry _ = 1 + (n + 1) * x := by ring done done @@ -1159,8 +1158,9 @@ For the second `sorry` step, we'll need to know that `n * x ^ 2 ≥ 0`. 
To prov ::: {.ind} ``` -@sq_nonneg : ∀ {R : Type u_1} [inst : LinearOrderedRing R] - (a : R), 0 ≤ a ^ 2 +@sq_nonneg : ∀ {α : Type u_1} [inst : LinearOrderedSemiring α] + [inst_1 : ExistsAddOfLE α] + (a : α), 0 ≤ a ^ 2 ``` ::: @@ -1188,10 +1188,10 @@ theorem Example_6_3_4 : ∀ (x : Real), x > -1 → _ = 0 := by ring show (1 + x) ^ (n + 1) ≥ 1 + (n + 1) * x from calc (1 + x) ^ (n + 1) - _ = (1 + x) * (1 + x) ^ n := by rfl - _ ≥ (1 + x) * (1 + n * x) := by rel [ih] - _ = 1 + x + n * x + n * x ^ 2 := by ring - _ ≥ 1 + x + n * x + 0 := by rel [h4] + _ = (1 + x) ^ n * (1 + x) := by rfl + _ ≥ (1 + n * x) * (1 + x) := by rel [ih] + _ = 1 + n * x + x + n * x ^ 2 := by ring + _ ≥ 1 + n * x + x + 0 := by rel [h4] _ = 1 + (n + 1) * x := by ring done done @@ -1509,7 +1509,7 @@ theorem well_ord_princ (S : Set Nat) : (∃ (n : Nat), n ∈ S) → done ``` -Section 6.4 of *HTPI* ends with an example of an application of the well ordering principle. The example gives a proof that $\sqrt{2}$ is irrational. If $\sqrt{2}$ were rational, then there would be natural numbers $p$ and $q$ such that $q \ne 0$ and $p/q = \sqrt{2}$, and therefore $p^2 = 2q^2$. So we can prove that $\sqrt{2}$ is irrational by showing that there do not exist natural numbers $p$ and $q$ such that $q \ne 0$ and $p^2 = 2q^2$. +Section 6.4 of *HTPI* ends with an example of an application of the well-ordering principle. The example gives a proof that $\sqrt{2}$ is irrational. If $\sqrt{2}$ were rational, then there would be natural numbers $p$ and $q$ such that $q \ne 0$ and $p/q = \sqrt{2}$, and therefore $p^2 = 2q^2$. So we can prove that $\sqrt{2}$ is irrational by showing that there do not exist natural numbers $p$ and $q$ such that $q \ne 0$ and $p^2 = 2q^2$. 
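Stated in Lean, that non-existence claim has roughly the following form (a sketch: the name `sqrt2_irrational` and the exact phrasing are assumptions here, chosen to match the set `S` used in the proof; the book proves this as `Theorem_6_4_5`):

```lean
-- Hypothetical restatement: there are no naturals q and p
-- with p * p = 2 * (q * q) and q ≠ 0
theorem sqrt2_irrational :
    ¬∃ (q p : Nat), p * p = 2 * (q * q) ∧ q ≠ 0 := by
  sorry
```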
The proof uses a definition from the exercises of Section 6.1: @@ -1529,7 +1529,7 @@ And we'll need another theorem that we haven't seen before: ``` @mul_left_cancel_iff_of_pos : ∀ {α : Type u_1} {a b c : α} [inst : MulZeroClass α] [inst_1 : PartialOrder α] - [inst_2 : PosMulMonoRev α], + [inst_2 : PosMulReflectLE α], 0 < a → (a * b = a * c ↔ b = c) ``` ::: @@ -1549,7 +1549,7 @@ S = {q : Nat | ∃ (p : Nat), p * p = 2 * (q * q) ∧ q ≠ 0} ``` ::: -would be nonempty, and therefore, by the well ordering principle, it would have a smallest element. We then show that this leads to a contradiction. Here is the proof. +would be nonempty, and therefore, by the well-ordering principle, it would have a smallest element. We then show that this leads to a contradiction. Here is the proof. ```lean theorem Theorem_6_4_5 : diff --git a/Chap7.qmd b/Chap7.qmd index c49049e..4cd6fd6 100644 --- a/Chap7.qmd +++ b/Chap7.qmd @@ -32,13 +32,13 @@ theorem dvd_a_of_dvd_b_mod {a b d : Nat} These theorems tell us that the gcd of `a` and `b` is the same as the gcd of `b` and `a % b`, which suggests that the following recursive definition should compute the gcd of `a` and `b`: ```lean -def **gcd:: (a b : Nat) : Nat := +def gcd (a b : Nat) : Nat := match b with | 0 => a - | n + 1 => gcd (n + 1) (a % (n + 1)) + | n + 1 => **gcd (n + 1) (a % (n + 1)):: ``` -Unfortunately, Lean puts a red squiggle under `gcd`, and it displays in the Infoview a long error message that begins `fail to show termination`. What is Lean complaining about? +Unfortunately, Lean puts a red squiggle under `gcd (n + 1) (a % (n + 1))`, and it displays in the Infoview a long error message that begins `fail to show termination`. What is Lean complaining about? The problem is that recursive definitions are dangerous. To understand the danger, consider the following recursive definition: @@ -58,17 +58,17 @@ Clearly this calculation will go on forever and will never produce an answer. 
S Lean insists that recursive definitions must avoid such nonterminating calculations. Why did it accept all of our previous recursive definitions? The reason is that in each case, the definition of the value of the function at a natural number `n` referred only to values of the function at numbers smaller than `n`. Since a decreasing list of natural numbers cannot go on forever, such definitions lead to calculations that are guaranteed to terminate. -What about our recursive definition of `gcd a b`? This function has two arguments, `a` and `b`, and when `b = n + 1`, the definition asks us to compute `gcd (n + 1) (a % (n + 1))`. The first argument here could actually be larger than the first argument in the value we are trying to compute, `gcd a b`. But the second argument will always be smaller, and that will suffice to guarantee that the calculation terminates. We can tell Lean to focus on the second argument by adding a `termination_by` clause to the end of our recursive definition: +What about our recursive definition of `gcd a b`? This function has two arguments, `a` and `b`, and when `b = n + 1`, the definition asks us to compute `gcd (n + 1) (a % (n + 1))`. The first argument here could actually be larger than the first argument in the value we are trying to compute, `gcd a b`. But the second argument will always be smaller, and that will suffice to guarantee that the calculation terminates. We can tell Lean to focus on the second argument `b` by adding a `termination_by` clause to the end of our recursive definition: ```lean def gcd (a b : Nat) : Nat := match b with | 0 => a | n + 1 => **gcd (n + 1) (a % (n + 1)):: - termination_by gcd a b => b + termination_by b ``` -Unfortunately, Lean still isn't satisfied, but the error message this time is more helpful. The message says that Lean failed to prove termination, and at the end of the message it says that the goal it failed to prove is `a % (n + 1) < Nat.succ n`. 
Here `Nat.succ n` denotes the successor of `n`---that is, `n + 1`---so Lean was trying to prove that `a % (n + 1) < n + 1`, which is precisely what is needed to show that the second argument of `gcd (n + 1) (a % (n + 1))` is smaller than the second argument of `gcd a b` when `b = n + 1`. We'll need to provide a proof of this goal to convince Lean to accept our recursive definition. Fortunately, it's not hard to prove: +Unfortunately, Lean still isn't satisfied, but the error message this time is more helpful. The message says that Lean failed to prove termination, and at the end of the message it says that the goal it failed to prove is `a % (n + 1) < n + 1`, which is precisely what is needed to show that the second argument of `gcd (n + 1) (a % (n + 1))` is smaller than the second argument of `gcd a b` when `b = n + 1`. We'll need to provide a proof of this goal to convince Lean to accept our recursive definition. Fortunately, it's not hard to prove: ```lean lemma mod_succ_lt (a n : Nat) : a % (n + 1) < n + 1 := by @@ -86,7 +86,7 @@ def gcd (a b : Nat) : Nat := | n + 1 => have : a % (n + 1) < n + 1 := mod_succ_lt a n gcd (n + 1) (a % (n + 1)) - termination_by gcd a b => b + termination_by b ``` Notice that in the `have` expression, we have not bothered to specify an identifier for the assertion being proven, since we never need to refer to it. 
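When no identifier is supplied, Lean records the assertion under the default name `this`. A minimal sketch (not from the book) of the same `a % (n + 1) < n + 1` fact, this time using the anonymous hypothesis explicitly:

```lean
-- The anonymous `have` is still available, under the default name `this`
example (a n : Nat) : a % (n + 1) < n + 1 := by
  have : 0 < n + 1 := Nat.succ_pos n
  exact Nat.mod_lt a this
```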
Let's try out our `gcd` function: @@ -183,10 +183,11 @@ mutual def gcd_c1 (a b : Nat) : Int := match b with | 0 => 1 - | n + 1 => + | n + 1 => have : a % (n + 1) < n + 1 := mod_succ_lt a n gcd_c2 (n + 1) (a % (n + 1)) - --Corresponds to s = t' in (*) + --Corresponds to s = t' + termination_by b def gcd_c2 (a b : Nat) : Int := match b with @@ -195,11 +196,9 @@ mutual have : a % (n + 1) < n + 1 := mod_succ_lt a n gcd_c1 (n + 1) (a % (n + 1)) - (gcd_c2 (n + 1) (a % (n + 1))) * ↑(a / (n + 1)) - --Corresponds to t = s' - t'q in (*) + --Corresponds to t = s' - t'q + termination_by b end - termination_by - gcd_c1 a b => b - gcd_c2 a b => b ``` Notice that in the definition of `gcd_c2`, the quotient `a / (n + 1)` is computed using natural-number division, but it is then coerced to be an integer so that it can be multiplied by the integer `gcd_c2 (n + 1) (a % (n + 1))`. @@ -265,12 +264,12 @@ We can try out the functions `gcd_c1` and `gcd_c2` as follows: --Note 6 * 672 - 25 * 161 = 4032 - 4025 = 7 = gcd 672 161 ``` -Finally, we turn to Theorem 7.1.6 in *HTPI*, which expresses one of the senses in which `gcd a b` is the *greatest* common divisor of `a` and `b`. Our proof follows the strategy of the proof in *HTPI*, with one additional step: we begin by using the theorem `Int.coe_nat_dvd` to change the goal from `d ∣ gcd a b` to `↑d ∣ ↑(gcd a b)` (where the coercions are from `Nat` to `Int`), so that the rest of the proof can work with integer algebra rather than natural-number algebra. +Finally, we turn to Theorem 7.1.6 in *HTPI*, which expresses one of the senses in which `gcd a b` is the *greatest* common divisor of `a` and `b`. Our proof follows the strategy of the proof in *HTPI*, with one additional step: we begin by using the theorem `Int.natCast_dvd_natCast` to change the goal from `d ∣ gcd a b` to `↑d ∣ ↑(gcd a b)` (where the coercions are from `Nat` to `Int`), so that the rest of the proof can work with integer algebra rather than natural-number algebra. 
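For reference, a minimal sketch of what `Int.natCast_dvd_natCast` provides (assuming current Mathlib, where it replaced the older name `Int.coe_nat_dvd`; the `↑` coercions are from `Nat` to `Int`):

```lean
import Mathlib.Tactic

-- Int.natCast_dvd_natCast : (↑m : Int) ∣ ↑n ↔ m ∣ n
example (m n : Nat) (h : m ∣ n) : (↑m : Int) ∣ ↑n :=
  Int.natCast_dvd_natCast.mpr h
```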
```lean theorem Theorem_7_1_6 {d a b : Nat} (h1 : d ∣ a) (h2 : d ∣ b) : d ∣ gcd a b := by - rewrite [←Int.coe_nat_dvd] --Goal : ↑d ∣ ↑(gcd a b) + rewrite [←Int.natCast_dvd_natCast] --Goal : ↑d ∣ ↑(gcd a b) set s : Int := gcd_c1 a b set t : Int := gcd_c2 a b have h3 : s * ↑a + t * ↑b = ↑(gcd a b) := gcd_lin_comb b a @@ -408,7 +407,7 @@ lemma exists_prime_factor : ∀ (n : Nat), 2 ≤ n → done ``` -Of course, by the well ordering principle, an immediate consequence of this lemma is that every number greater than or equal to 2 has a *smallest* prime factor. +Of course, by the well-ordering principle, an immediate consequence of this lemma is that every number greater than or equal to 2 has a *smallest* prime factor. ```lean lemma exists_least_prime_factor {n : Nat} (h : 2 ≤ n) : @@ -535,7 +534,7 @@ Before we can prove the existence of prime factorizations, we will need one more List.length (a :: as) = Nat.succ (List.length as) @List.exists_cons_of_ne_nil : ∀ {α : Type u_1} {l : List α}, - l ≠ [] → ∃ (b : α), ∃ (L : List α), l = b :: L + l ≠ [] → ∃ (b : α) (L : List α), l = b :: L ``` ::: @@ -704,7 +703,7 @@ def rel_prime (a b : Nat) : Prop := gcd a b = 1 theorem Theorem_7_2_2 {a b c : Nat} (h1 : c ∣ a * b) (h2 : rel_prime a c) : c ∣ b := by - rewrite [←Int.coe_nat_dvd] --Goal : ↑c ∣ ↑b + rewrite [←Int.natCast_dvd_natCast] --Goal : ↑c ∣ ↑b define at h1; define at h2; define obtain (j : Nat) (h3 : a * b = c * j) from h1 set s : Int := gcd_c1 a c @@ -2085,7 +2084,7 @@ We will also need to use Lemma 7.4.5 from *HTPI*. 
To prove that lemma in Lean, ::: {.ind} ``` -@Int.coe_nat_dvd_left : ∀ {n : ℕ} {z : ℤ}, ↑n ∣ z ↔ n ∣ Int.natAbs z +@Int.natCast_dvd : ∀ {n : ℤ} {m : ℕ}, ↑m ∣ n ↔ m ∣ Int.natAbs n Int.natAbs_mul : ∀ (a b : ℤ), Int.natAbs (a * b) = Int.natAbs a * Int.natAbs b @@ -2099,9 +2098,9 @@ With the help of these theorems, our extended version of `Theorem_7_2_2` follows ```lean theorem Theorem_7_2_2_Int {a c : Nat} {b : Int} (h1 : ↑c ∣ ↑a * b) (h2 : rel_prime a c) : ↑c ∣ b := by - rewrite [Int.coe_nat_dvd_left, Int.natAbs_mul, + rewrite [Int.natCast_dvd, Int.natAbs_mul, Int.natAbs_ofNat] at h1 --h1 : c ∣ a * Int.natAbs b - rewrite [Int.coe_nat_dvd_left] --Goal : c ∣ Int.natAbs b + rewrite [Int.natCast_dvd] --Goal : c ∣ Int.natAbs b show c ∣ Int.natAbs b from Theorem_7_2_2 h1 h2 done ``` @@ -2181,10 +2180,9 @@ lemma Lemma_7_5_1 {p e d m c s : Nat} {t : Int} ring done have h8 : [m]_p = [0]_p := (cc_eq_iff_congr _ _ _).rtl h7 - have h9 : 0 < (e * d) := by + have h9 : e * d ≠ 0 := by rewrite [h2] - have h10 : 0 ≤ (p - 1) * s := Nat.zero_le _ - linarith + show (p - 1) * s + 1 ≠ 0 from Nat.add_one_ne_zero _ done have h10 : (0 : Int) ^ (e * d) = 0 := zero_pow h9 have h11 : [c ^ d]_p = [m]_p := @@ -2196,7 +2194,7 @@ lemma Lemma_7_5_1 {p e d m c s : Nat} {t : Int} _ = [0 ^ (e * d)]_p := Exercise_7_4_5_Int _ _ _ _ = [0]_p := by rw [h10] _ = [m]_p := by rw [h8] - show c ^ d ≡ m (MOD p) from (cc_eq_iff_congr _ _ _).ltr h11 + show c ^ d ≡ m (MOD p) from (cc_eq_iff_congr _ _ _).ltr h11 done · -- Case 2. 
h6 : ¬p ∣ m have h7 : rel_prime m p := rel_prime_of_prime_not_dvd h1 h6 @@ -2216,7 +2214,7 @@ lemma Lemma_7_5_1 {p e d m c s : Nat} {t : Int} _ = [1]_p * [m]_p := by rw [h10] _ = [m]_p * [1]_p := by ring _ = [m]_p := Theorem_7_3_6_7 _ - show c ^ d ≡ m (MOD p) from (cc_eq_iff_congr _ _ _).ltr h11 + show c ^ d ≡ m (MOD p) from (cc_eq_iff_congr _ _ _).ltr h11 done done ``` diff --git a/Chap8.qmd b/Chap8.qmd index f76418e..d0dcddf 100644 --- a/Chap8.qmd +++ b/Chap8.qmd @@ -670,7 +670,7 @@ theorem Theorem_8_1_5_1_to_2 {U : Type} {A : Set U} (h1 : ctble A) : done ``` -For the proof of 2 → 3, suppose we have `A : Set U` and `S : Rel Nat U`, and the statement `fcnl_onto_from_nat S A` is true. We need to come up with a relation `R : Rel U Nat` for which we can prove `fcnl_one_one_to_nat R A`. You might be tempted to try `R = invRel S`, but there is a problem with this choice: if `x ∈ A`, there might be multiple natural numbers `n` such that `S n x` holds, but we must make sure that there is only one `n` for which `R x n` holds. Our solution to this problem will be to define `R x n` to mean that `n` is the smallest natural number for which `S n x` holds. (The proof in *HTPI* uses a similar idea.) The well ordering principle guarantees that there always is such a smallest element. +For the proof of 2 → 3, suppose we have `A : Set U` and `S : Rel Nat U`, and the statement `fcnl_onto_from_nat S A` is true. We need to come up with a relation `R : Rel U Nat` for which we can prove `fcnl_one_one_to_nat R A`. You might be tempted to try `R = invRel S`, but there is a problem with this choice: if `x ∈ A`, there might be multiple natural numbers `n` such that `S n x` holds, but we must make sure that there is only one `n` for which `R x n` holds. Our solution to this problem will be to define `R x n` to mean that `n` is the smallest natural number for which `S n x` holds. (The proof in *HTPI* uses a similar idea.) 
The well-ordering principle guarantees that there always is such a smallest element. ```lean def least_rel_to {U : Type} (S : Rel Nat U) (x : U) (n : Nat) : Prop := @@ -846,7 +846,7 @@ lemma fqn_one_one : one_to_one fqn := by have h3 : fzn q1.num = fzn q2.num ∧ q1.den = q2.den := Prod.mk.inj h2 have h4 : q1.num = q2.num := fzn_one_one _ _ h3.left - show q1 = q2 from Rat.ext q1 q2 h4 h3.right + show q1 = q2 from Rat.ext h4 h3.right done lemma image_fqn_unbdd : diff --git a/docs/Appendix.html b/docs/Appendix.html index 89afb18..f702c1b 100644 --- a/docs/Appendix.html +++ b/docs/Appendix.html @@ -234,7 +234,7 @@

In This Chapter

@@ -599,8 +599,8 @@

Typing Symbols

\\ - -\bigtriangleup + +\symmdiff diff --git a/docs/Chap1.html b/docs/Chap1.html index 695b7b2..6ec40d9 100644 --- a/docs/Chap1.html +++ b/docs/Chap1.html @@ -165,7 +165,7 @@

1&nbs diff --git a/docs/Chap2.html b/docs/Chap2.html index 6114c23..c2b1c6a 100644 --- a/docs/Chap2.html +++ b/docs/Chap2.html @@ -165,7 +165,7 @@

2&nbs diff --git a/docs/Chap3.html b/docs/Chap3.html index bd30d87..f2797dd 100644 --- a/docs/Chap3.html +++ b/docs/Chap3.html @@ -239,7 +239,7 @@

In This Chapter

@@ -643,7 +643,7 @@

To use

You may plug in any value of type U, say a, for x and use this given to conclude that P a is true.

This strategy says that if you have h : ∀ (x : U), P x and a : U, then you can infer P a. Indeed, in this situation Lean will recognize h a as a proof of P a. For example, you can write have h' : P a := h a in a Lean tactic-mode proof, and Lean will add h' : P a to the tactic state. Note that a here need not be simply a variable; it can be any expression denoting an object of type U.

-

Let’s try these strategies out in a Lean proof. In Lean, if you don’t want to give a theorem a name, you can simply call it an example rather than a theorem, and then there is no need to give it a name. In the following theorem, you can enter the symbol by typing \forall or \all, and you can enter by typing \exists or \ex.

+

Let’s try these strategies out in a Lean proof. In Lean, if you don’t want to give a theorem a name, you can simply call it an example rather than a theorem, and then there is no need to give it a name. In the following example, you can enter the symbol by typing \forall or \all, and you can enter by typing \exists or \ex.

example (U : Type) (P Q : Pred U)
@@ -824,8 +824,8 @@ 

To use \\ - -\bigtriangleup + +\symmdiff @@ -2714,7 +2714,7 @@

Exercises

theorem Exercise_3_5_24a (U : Type) (A B C : Set U) :
-    (A ∪ B) △ C ⊆ (A △ C) ∪ (B △ C) := sorry
+ (A ∪ B) ∆ C ⊆ (A ∆ C) ∪ (B ∆ C) := sorry
@@ -3392,11 +3392,11 @@

To us define : x ∈ Y ∪ X ++apply?:: done

-

It takes a few seconds for Lean to search its library of theorems, but eventually a blue squiggle appears under apply?, indicating that the tactic has produced an answer. You will find the answer in the Infoview pane: Try this: exact or_comm. The word exact is the name of a tactic that we have not discussed; it is a shorthand for show _ from, where the blank gets filled in with the goal. Thus, you can think of apply?’s answer as a shortened form of the tactic

+

It takes a few seconds for Lean to search its library of theorems, but eventually a blue squiggle appears under apply?, indicating that the tactic has produced an answer. You will find the answer in the Infoview pane: Try this: exact Or.comm. The word exact is the name of a tactic that we have not discussed; it is a shorthand for show _ from, where the blank gets filled in with the goal. Thus, you can think of apply?’s answer as a shortened form of the tactic

-
show x ∈ X ∨ x ∈ Y ↔ x ∈ Y ∨ x ∈ X from or_comm
+
show x ∈ X ∨ x ∈ Y ↔ x ∈ Y ∨ x ∈ X from Or.comm
-

Of course, this is precisely the step we used earlier to complete the proof.

+

The command #check @Or.comm will tell you that Or.comm is just an alternative name for the theorem or_comm. So the step suggested by the apply? tactic is essentially the same as the step we used earlier to complete the proof.

Usually your proof will be more readable if you use the show tactic to state explicitly the goal that is being proven. This also gives Lean a chance to correct you if you have become confused about what goal you are proving. But sometimes—for example, if the goal is very long—it is convenient to use the exact tactic instead. You might think of exact as meaning “the following is a term-mode proof that is exactly what is needed to prove the goal.”

The apply? tactic has not only come up with a suggested tactic, it has applied that tactic, and the proof is now complete. You can confirm that the tactic completes the proof by replacing the line apply? in the proof with apply?’s suggested exact tactic.

The apply? tactic is somewhat unpredictable; sometimes it is able to find the right theorem in the library, and sometimes it isn’t. But it is always worth a try. Another way to try to find theorems is to visit the documentation page for Lean’s mathematics library, which can be found at https://leanprover-community.github.io/mathlib4_docs/.

@@ -3413,7 +3413,8 @@

Exercises

(h1 : 𝒫 (A ∪ B) = 𝒫 A ∪ 𝒫 B) : A ⊆ B ∨ B ⊆ A := by --Hint: Start like this: have h2 : A ∪ B ∈ 𝒫 (A ∪ B) := sorry - **done:: + + **done::
theorem Exercise_3_6_6b (U : Type) :
diff --git a/docs/Chap4.html b/docs/Chap4.html
index 3b99727..cf9df52 100644
--- a/docs/Chap4.html
+++ b/docs/Chap4.html
@@ -238,7 +238,7 @@ 

In This Chapter

diff --git a/docs/Chap5.html b/docs/Chap5.html index 14f9a44..72bba00 100644 --- a/docs/Chap5.html +++ b/docs/Chap5.html @@ -238,7 +238,7 @@

In This Chapter

diff --git a/docs/Chap6.html b/docs/Chap6.html index 32e1c29..cbc4362 100644 --- a/docs/Chap6.html +++ b/docs/Chap6.html @@ -238,7 +238,7 @@

In This Chapter

@@ -902,7 +902,7 @@

6.3. Recursion

So we have three inequalities that we need to prove before we can justify the steps of the calculational proof: n + 1 > 0, n + 1 > 2, and 2 ^ n > 0. We’ll insert have steps before the calculational proof to assert these three inequalities. If you try it, you’ll find that linarith can prove the first two, but not the third.

How can we prove 2 ^ n > 0? It is often helpful to think about whether there is a general principle that is behind a statement we are trying to prove. In our case, the inequality 2 ^ n > 0 is an instance of the general fact that if m and n are any natural numbers with m > 0, then m ^ n > 0. Maybe that fact is in Lean’s library:

example (m n : Nat) (h : m > 0) : m ^ n > 0 := by ++apply?::
-

The apply? tactic comes up with exact Nat.pos_pow_of_pos n h, and #check @pos_pow_of_pos gives the result

+

The apply? tactic comes up with exact Nat.pos_pow_of_pos n h, and #check @Nat.pos_pow_of_pos gives the result

@Nat.pos_pow_of_pos : ∀ {n : ℕ} (m : ℕ), 0 < n → 0 < n ^ m
@@ -947,7 +947,7 @@

6.3. Recursion

_ = 2 ^ (n + 1) := by ring done done
-

The next example in HTPI is a proof of one of the laws of exponents: a ^ (m + n) = a ^ m * a ^ n. Lean’s definition of exponentiation with natural number exponents is recursive. For some reason, the definitions are slightly different for different kinds of bases. The definitions Lean uses are essentially as follows:

+

The next example in HTPI is a proof of one of the laws of exponents: a ^ (m + n) = a ^ m * a ^ n. Lean’s definition of exponentiation with natural number exponents is recursive. The definitions Lean uses are essentially as follows:

--For natural numbers b and k, b ^ k = nat_pow b k:
 def nat_pow (b k : Nat) : Nat :=
   match k with
@@ -958,7 +958,7 @@ 

6.3. Recursion

def real_pow (b : Real) (k : Nat) : Real := match k with | 0 => 1 - | n + 1 => b * (real_pow b n)
+ | n + 1 => (real_pow b n) * b

Let’s prove the addition law for exponents:

theorem Example_6_3_2_cheating : ∀ (a : Real) (m n : Nat),
     a ^ (m + n) = a ^ m * a ^ n := by
@@ -984,13 +984,12 @@ 

6.3. Recursion

show a ^ (m + (n + 1)) = a ^ m * a ^ (n + 1) from calc a ^ (m + (n + 1)) _ = a ^ ((m + n) + 1) := by rw [add_assoc] - _ = a * a ^ (m + n) := by rfl - _ = a * (a ^ m * a ^ n) := by rw [ih] - _ = a ^ m * (a * a ^ n) := by - rw [←mul_assoc, mul_comm a, mul_assoc] - _ = a ^ m * (a ^ (n + 1)) := by rfl - done - done
+ _ = a ^ (m + n) * a := by rfl + _ = (a ^ m * a ^ n) * a := by rw [ih] + _ = a ^ m * (a ^ n * a) := by rw [mul_assoc] + _ = a ^ m * (a ^ (n + 1)) := by rfl + done + done

Finally, we’ll prove the theorem in Example 6.3.4 of HTPI, which again involves exponentiation with natural number exponents. Here’s the beginning of the proof:

@@ -1079,10 +1078,10 @@

6.3. Recursion

rewrite [Nat.cast_succ] show (1 + x) ^ (n + 1) ≥ 1 + (n + 1) * x from calc (1 + x) ^ (n + 1) - _ = (1 + x) * (1 + x) ^ n := by rfl - _ ≥ (1 + x) * (1 + n * x) := sorry - _ = 1 + x + n * x + n * x ^ 2 := by ring - _ ≥ 1 + x + n * x + 0 := sorry + _ = (1 + x) ^ n * (1 + x) := by rfl + _ ≥ (1 + n * x) * (1 + x) := sorry + _ = 1 + n * x + x + n * x ^ 2 := by ring + _ ≥ 1 + n * x + x + 0 := sorry _ = 1 + (n + 1) * x := by ring done done
@@ -1104,17 +1103,18 @@

6.3. Recursion

have h2 : 0 ≤ 1 + x := by linarith show (1 + x) ^ (n + 1) ≥ 1 + (n + 1) * x from calc (1 + x) ^ (n + 1) - _ = (1 + x) * (1 + x) ^ n := by rfl - _ ≥ (1 + x) * (1 + n * x) := by rel [ih] - _ = 1 + x + n * x + n * x ^ 2 := by ring - _ ≥ 1 + x + n * x + 0 := sorry + _ = (1 + x) ^ n * (1 + x) := by rfl + _ ≥ (1 + n * x) * (1 + x) := by rel [ih] + _ = 1 + n * x + x + n * x ^ 2 := by ring + _ ≥ 1 + n * x + x + 0 := sorry _ = 1 + (n + 1) * x := by ring done done

For the second sorry step, we’ll need to know that n * x ^ 2 ≥ 0. To prove it, we start with the fact that the square of any real number is nonnegative:

-
@sq_nonneg : ∀ {R : Type u_1} [inst : LinearOrderedRing R]
-              (a : R), 0 ≤ a ^ 2
+
@sq_nonneg : ∀ {α : Type u_1} [inst : LinearOrderedSemiring α]
+              [inst_1 : ExistsAddOfLE α]
+              (a : α), 0 ≤ a ^ 2

As usual, we don’t need to pay much attention to the implicit arguments; what is important is the last line, which tells us that sq_nonneg x is a proof of x ^ 2 ≥ 0. To get n * x ^ 2 ≥ 0 we just have to multiply both sides by n, which we can justify with the rel tactic, and then one more application of rel will handle the remaining sorry. Here is the complete proof:

theorem Example_6_3_4 : ∀ (x : Real), x > -1 →
@@ -1138,10 +1138,10 @@ 

6.3. Recursion

_ = 0 := by ring show (1 + x) ^ (n + 1) ≥ 1 + (n + 1) * x from calc (1 + x) ^ (n + 1) - _ = (1 + x) * (1 + x) ^ n := by rfl - _ ≥ (1 + x) * (1 + n * x) := by rel [ih] - _ = 1 + x + n * x + n * x ^ 2 := by ring - _ ≥ 1 + x + n * x + 0 := by rel [h4] + _ = (1 + x) ^ n * (1 + x) := by rfl + _ ≥ (1 + n * x) * (1 + x) := by rel [ih] + _ = 1 + n * x + x + n * x ^ 2 := by ring + _ ≥ 1 + n * x + x + 0 := by rel [h4] _ = 1 + (n + 1) * x := by ring done done
@@ -1367,7 +1367,7 @@

     have h5 : ¬m < n := h4 h3
     linarith
     done

-Section 6.4 of HTPI ends with an example of an application of the well ordering principle. The example gives a proof that \(\sqrt{2}\) is irrational. If \(\sqrt{2}\) were rational, then there would be natural numbers \(p\) and \(q\) such that \(q \ne 0\) and \(p/q = \sqrt{2}\), and therefore \(p^2 = 2q^2\). So we can prove that \(\sqrt{2}\) is irrational by showing that there do not exist natural numbers \(p\) and \(q\) such that \(q \ne 0\) and \(p^2 = 2q^2\).
+Section 6.4 of HTPI ends with an example of an application of the well-ordering principle. The example gives a proof that \(\sqrt{2}\) is irrational. If \(\sqrt{2}\) were rational, then there would be natural numbers \(p\) and \(q\) such that \(q \ne 0\) and \(p/q = \sqrt{2}\), and therefore \(p^2 = 2q^2\). So we can prove that \(\sqrt{2}\) is irrational by showing that there do not exist natural numbers \(p\) and \(q\) such that \(q \ne 0\) and \(p^2 = 2q^2\).

The proof uses a definition from the exercises of Section 6.1:

def nat_even (n : Nat) : Prop := ∃ (k : Nat), n = 2 * k

We will also use the following lemma, whose proof we leave as an exercise for you:

@@ -1376,7 +1376,7 @@

To
@mul_left_cancel_iff_of_pos : ∀ {α : Type u_1} {a b c : α}
                     [inst : MulZeroClass α] [inst_1 : PartialOrder α]
-                    [inst_2 : PosMulMonoRev α],
+                    [inst_2 : PosMulReflectLE α],
                     0 < a → (a * b = a * c ↔ b = c)

To show that \(\sqrt{2}\) is irrational, we will prove the statement

@@ -1387,7 +1387,7 @@

To
S = {q : Nat | ∃ (p : Nat), p * p = 2 * (q * q) ∧ q ≠ 0}
-would be nonempty, and therefore, by the well ordering principle, it would have a smallest element. We then show that this leads to a contradiction. Here is the proof.
+would be nonempty, and therefore, by the well-ordering principle, it would have a smallest element. We then show that this leads to a contradiction. Here is the proof.

theorem Theorem_6_4_5 :
     ¬∃ (q p : Nat), p * p = 2 * (q * q) ∧ q ≠ 0 := by
   set S : Set Nat :=
diff --git a/docs/Chap7.html b/docs/Chap7.html
index dcca3a5..f68f104 100644
--- a/docs/Chap7.html
+++ b/docs/Chap7.html
@@ -238,7 +238,7 @@ 

In This Chapter

@@ -282,11 +282,11 @@

7.1. Greatest Com

 theorem dvd_a_of_dvd_b_mod {a b d : Nat} (h1 : d ∣ b)
     (h2 : d ∣ (a % b)) : d ∣ a := sorry

These theorems tell us that the gcd of a and b is the same as the gcd of b and a % b, which suggests that the following recursive definition should compute the gcd of a and b:

-def **gcd:: (a b : Nat) : Nat :=
+def gcd (a b : Nat) : Nat :=
   match b with
     | 0 => a
-    | n + 1 => gcd (n + 1) (a % (n + 1))
+    | n + 1 => **gcd (n + 1) (a % (n + 1))::

-Unfortunately, Lean puts a red squiggle under gcd, and it displays in the Infoview a long error message that begins fail to show termination. What is Lean complaining about?
+Unfortunately, Lean puts a red squiggle under gcd (n + 1) (a % (n + 1)), and it displays in the Infoview a long error message that begins fail to show termination. What is Lean complaining about?

The problem is that recursive definitions are dangerous. To understand the danger, consider the following recursive definition:

def loop (n : Nat) : Nat := loop (n + 1)

Suppose we try to use this definition to compute loop 3. The definition would lead us to perform the following calculation:
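The calculation itself is cut off by the hunk boundary above; it is presumably the unending unfolding

```
loop 3 = loop (3 + 1) = loop 4 = loop (4 + 1) = loop 5 = ⋯
```

which never reaches a base case.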

@@ -295,13 +295,13 @@

7.1. Greatest Com

Clearly this calculation will go on forever and will never produce an answer. So the definition of loop does not actually succeed in defining a function from Nat to Nat.

Lean insists that recursive definitions must avoid such nonterminating calculations. Why did it accept all of our previous recursive definitions? The reason is that in each case, the definition of the value of the function at a natural number n referred only to values of the function at numbers smaller than n. Since a decreasing list of natural numbers cannot go on forever, such definitions lead to calculations that are guaranteed to terminate.
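As an illustration of the kind of recursion Lean accepts without complaint, consider a definition in the same style (a standard example, not taken from the diff above):

```lean
def fact (n : Nat) : Nat :=
  match n with
    | 0 => 1
    | k + 1 => (k + 1) * fact k   --recursive call at k, which is smaller than k + 1
```

Here every recursive call is at a smaller natural number, so Lean can see on its own that the computation terminates.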

-What about our recursive definition of gcd a b? This function has two arguments, a and b, and when b = n + 1, the definition asks us to compute gcd (n + 1) (a % (n + 1)). The first argument here could actually be larger than the first argument in the value we are trying to compute, gcd a b. But the second argument will always be smaller, and that will suffice to guarantee that the calculation terminates. We can tell Lean to focus on the second argument by adding a termination_by clause to the end of our recursive definition:
+What about our recursive definition of gcd a b? This function has two arguments, a and b, and when b = n + 1, the definition asks us to compute gcd (n + 1) (a % (n + 1)). The first argument here could actually be larger than the first argument in the value we are trying to compute, gcd a b. But the second argument will always be smaller, and that will suffice to guarantee that the calculation terminates. We can tell Lean to focus on the second argument b by adding a termination_by clause to the end of our recursive definition:

def gcd (a b : Nat) : Nat :=
   match b with
     | 0 => a
     | n + 1 => **gcd (n + 1) (a % (n + 1))::
-  termination_by gcd a b => b
+  termination_by b

-Unfortunately, Lean still isn’t satisfied, but the error message this time is more helpful. The message says that Lean failed to prove termination, and at the end of the message it says that the goal it failed to prove is a % (n + 1) < Nat.succ n. Here Nat.succ n denotes the successor of n—that is, n + 1—so Lean was trying to prove that a % (n + 1) < n + 1, which is precisely what is needed to show that the second argument of gcd (n + 1) (a % (n + 1)) is smaller than the second argument of gcd a b when b = n + 1. We’ll need to provide a proof of this goal to convince Lean to accept our recursive definition. Fortunately, it’s not hard to prove:
+Unfortunately, Lean still isn’t satisfied, but the error message this time is more helpful. The message says that Lean failed to prove termination, and at the end of the message it says that the goal it failed to prove is a % (n + 1) < n + 1, which is precisely what is needed to show that the second argument of gcd (n + 1) (a % (n + 1)) is smaller than the second argument of gcd a b when b = n + 1. We’ll need to provide a proof of this goal to convince Lean to accept our recursive definition. Fortunately, it’s not hard to prove:

lemma mod_succ_lt (a n : Nat) : a % (n + 1) < n + 1 := by
   have h : n + 1 > 0 := Nat.succ_pos n
   show a % (n + 1) < n + 1 from Nat.mod_lt a h
@@ -313,7 +313,7 @@ 

7.1. Greatest Com

     | n + 1 =>
       have : a % (n + 1) < n + 1 := mod_succ_lt a n
       gcd (n + 1) (a % (n + 1))
-  termination_by gcd a b => b
+  termination_by b
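Assembling the pieces shown across the hunks above, the accepted definition presumably reads in full:

```lean
def gcd (a b : Nat) : Nat :=
  match b with
    | 0 => a
    | n + 1 =>
      have : a % (n + 1) < n + 1 := mod_succ_lt a n
      gcd (n + 1) (a % (n + 1))
  termination_by b
```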

Notice that in the have expression, we have not bothered to specify an identifier for the assertion being proven, since we never need to refer to it. Let’s try out our gcd function:

++#eval:: gcd 672 161    --Answer: 7.  Note 672 = 7 * 96 and 161 = 7 * 23.

To establish the main properties of gcd a b we’ll need several lemmas. We prove some of them and leave others as exercises.

@@ -378,23 +378,22 @@

7.1. Greatest Com

 def gcd_c1 (a b : Nat) : Int :=
   match b with
     | 0 => 1
-    | n + 1 =>
+    | n + 1 =>
       have : a % (n + 1) < n + 1 := mod_succ_lt a n
       gcd_c2 (n + 1) (a % (n + 1))
-      --Corresponds to s = t' in (*)
-
-def gcd_c2 (a b : Nat) : Int :=
-  match b with
-    | 0 => 0
-    | n + 1 =>
-      have : a % (n + 1) < n + 1 := mod_succ_lt a n
-      gcd_c1 (n + 1) (a % (n + 1)) -
-        (gcd_c2 (n + 1) (a % (n + 1))) * ↑(a / (n + 1))
-      --Corresponds to t = s' - t'q in (*)
-end
-termination_by
-  gcd_c1 a b => b
-  gcd_c2 a b => b
+      --Corresponds to s = t'
+  termination_by b
+
+def gcd_c2 (a b : Nat) : Int :=
+  match b with
+    | 0 => 0
+    | n + 1 =>
+      have : a % (n + 1) < n + 1 := mod_succ_lt a n
+      gcd_c1 (n + 1) (a % (n + 1)) -
+        (gcd_c2 (n + 1) (a % (n + 1))) * ↑(a / (n + 1))
+      --Corresponds to t = s' - t'q
+  termination_by b
+end

Notice that in the definition of gcd_c2, the quotient a / (n + 1) is computed using natural-number division, but it is then coerced to be an integer so that it can be multiplied by the integer gcd_c2 (n + 1) (a % (n + 1)).

Our main theorem about these functions is that they give the coefficients needed to write gcd a b as a linear combination of a and b. As usual, stating a few lemmas first helps with the proof. We leave the proofs of two of them as exercises for you (hint: imitate the proof of gcd_nonzero above).

lemma gcd_c1_base (a : Nat) : gcd_c1 a 0 = 1 := by rfl
@@ -444,10 +443,10 @@ 

7.1. Greatest Com
++#eval:: gcd_c1 672 161  --Answer: 6
 ++#eval:: gcd_c2 672 161  --Answer: -25
   --Note 6 * 672 - 25 * 161 = 4032 - 4025 = 7 = gcd 672 161
-Finally, we turn to Theorem 7.1.6 in HTPI, which expresses one of the senses in which gcd a b is the greatest common divisor of a and b. Our proof follows the strategy of the proof in HTPI, with one additional step: we begin by using the theorem Int.coe_nat_dvd to change the goal from d ∣ gcd a b to ↑d ∣ ↑(gcd a b) (where the coercions are from Nat to Int), so that the rest of the proof can work with integer algebra rather than natural-number algebra.
+Finally, we turn to Theorem 7.1.6 in HTPI, which expresses one of the senses in which gcd a b is the greatest common divisor of a and b. Our proof follows the strategy of the proof in HTPI, with one additional step: we begin by using the theorem Int.natCast_dvd_natCast to change the goal from d ∣ gcd a b to ↑d ∣ ↑(gcd a b) (where the coercions are from Nat to Int), so that the rest of the proof can work with integer algebra rather than natural-number algebra.

theorem Theorem_7_1_6 {d a b : Nat} (h1 : d ∣ a) (h2 : d ∣ b) :
     d ∣ gcd a b := by
-  rewrite [←Int.coe_nat_dvd]    --Goal : ↑d ∣ ↑(gcd a b)
+  rewrite [←Int.natCast_dvd_natCast]    --Goal : ↑d ∣ ↑(gcd a b)
   set s : Int := gcd_c1 a b
   set t : Int := gcd_c2 a b
   have h3 : s * ↑a + t * ↑b = ↑(gcd a b) := gcd_lin_comb b a
@@ -555,7 +554,7 @@ 

7.2. Prime Factorizati

     show p ∣ n from dvd_trans h7.right h8
     done
   done

-Of course, by the well ordering principle, an immediate consequence of this lemma is that every number greater than or equal to 2 has a smallest prime factor.
+Of course, by the well-ordering principle, an immediate consequence of this lemma is that every number greater than or equal to 2 has a smallest prime factor.

lemma exists_least_prime_factor {n : Nat} (h : 2 ≤ n) :
     ∃ (p : Nat), prime_factor p n ∧
     ∀ (q : Nat), prime_factor q n → p ≤ q := by
@@ -659,7 +658,7 @@ 

7.2. Prime Factorizati

 List.length (a :: as) = Nat.succ (List.length as)

 @List.exists_cons_of_ne_nil : ∀ {α : Type u_1} {l : List α},
-  l ≠ [] → ∃ (b : α), ∃ (L : List α), l = b :: L
+  l ≠ [] → ∃ (b : α) (L : List α), l = b :: L

And we’ll need one more lemma, which follows from the three theorems above; we leave the proof as an exercise for you:

lemma exists_cons_of_length_eq_succ {A : Type}
@@ -812,7 +811,7 @@ 

7.2. Prime Factorizati

 theorem Theorem_7_2_2 {a b c : Nat}
     (h1 : c ∣ a * b) (h2 : rel_prime a c) : c ∣ b := by
-  rewrite [←Int.coe_nat_dvd]            --Goal : ↑c ∣ ↑b
+  rewrite [←Int.natCast_dvd_natCast]    --Goal : ↑c ∣ ↑b
   define at h1; define at h2; define
   obtain (j : Nat) (h3 : a * b = c * j) from h1
   set s : Int := gcd_c1 a c
@@ -1871,7 +1870,7 @@

7.5. Public-Key Cr

   done

We will also need to use Lemma 7.4.5 from HTPI. To prove that lemma in Lean, we will use Theorem_7_2_2, which says that for natural numbers a, b, and c, if c ∣ a * b and c and a are relatively prime, then c ∣ b. But we will need to extend the theorem to allow b to be an integer rather than a natural number. To prove this extension, we use the Lean function Int.natAbs : Int → Nat, which computes the absolute value of an integer. Lean knows several theorems about this function:

-@Int.coe_nat_dvd_left : ∀ {n : ℕ} {z : ℤ}, ↑n ∣ z ↔ n ∣ Int.natAbs z
+@Int.natCast_dvd : ∀ {n : ℤ} {m : ℕ}, ↑m ∣ n ↔ m ∣ Int.natAbs n
 
 Int.natAbs_mul : ∀ (a b : ℤ),
                   Int.natAbs (a * b) = Int.natAbs a * Int.natAbs b
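Since Int.natAbs computes the absolute value of an integer as a natural number, one would expect, for example:

```lean
#eval Int.natAbs (-5)   --Answer: 5
```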
@@ -1881,9 +1880,9 @@ 

7.5. Public-Key Cr

With the help of these theorems, our extended version of Theorem_7_2_2 follows easily from the original version:

theorem Theorem_7_2_2_Int {a c : Nat} {b : Int}
     (h1 : ↑c ∣ ↑a * b) (h2 : rel_prime a c) : ↑c ∣ b := by
-  rewrite [Int.coe_nat_dvd_left, Int.natAbs_mul,
+  rewrite [Int.natCast_dvd, Int.natAbs_mul,
     Int.natAbs_ofNat] at h1        --h1 : c ∣ a * Int.natAbs b
-  rewrite [Int.coe_nat_dvd_left]   --Goal : c ∣ Int.natAbs b
+  rewrite [Int.natCast_dvd]        --Goal : c ∣ Int.natAbs b
   show c ∣ Int.natAbs b from Theorem_7_2_2 h1 h2
   done

With that preparation, we can now prove Lemma_7_4_5.

@@ -1951,44 +1950,43 @@

7.5. Public-Key Cr

       ring
     done
   have h8 : [m]_p = [0]_p := (cc_eq_iff_congr _ _ _).rtl h7
-  have h9 : 0 < (e * d) := by
+  have h9 : e * d ≠ 0 := by
     rewrite [h2]
-    have h10 : 0 ≤ (p - 1) * s := Nat.zero_le _
-    linarith
-    done
-  have h10 : (0 : Int) ^ (e * d) = 0 := zero_pow h9
-  have h11 : [c ^ d]_p = [m]_p :=
-    calc [c ^ d]_p
-      _ = [c]_p ^ d := by rw [Exercise_7_4_5_Nat]
-      _ = ([m]_p ^ e) ^ d := by rw [h5]
-      _ = [m]_p ^ (e * d) := by ring
-      _ = [0]_p ^ (e * d) := by rw [h8]
-      _ = [0 ^ (e * d)]_p := Exercise_7_4_5_Int _ _ _
-      _ = [0]_p := by rw [h10]
-      _ = [m]_p := by rw [h8]
-  show c ^ d ≡ m (MOD p) from (cc_eq_iff_congr _ _ _).ltr h11
-  done
-· -- Case 2. h6 : ¬p ∣ m
-  have h7 : rel_prime m p := rel_prime_of_prime_not_dvd h1 h6
-  have h8 : rel_prime p m := rel_prime_symm h7
-  have h9 : NeZero p := prime_NeZero h1
-  have h10 : (1 : Int) ^ s = 1 := by ring
-  have h11 : [c ^ d]_p = [m]_p :=
-    calc [c ^ d]_p
-      _ = [c]_p ^ d := by rw [Exercise_7_4_5_Nat]
-      _ = ([m]_p ^ e) ^ d := by rw [h5]
-      _ = [m]_p ^ (e * d) := by ring
-      _ = [m]_p ^ ((p - 1) * s + 1) := by rw [h2]
-      _ = ([m]_p ^ (p - 1)) ^ s * [m]_p := by ring
-      _ = ([m]_p ^ (phi p)) ^ s * [m]_p := by rw [phi_prime h1]
-      _ = [1]_p ^ s * [m]_p := by rw [Theorem_7_4_2 h8]
-      _ = [1 ^ s]_p * [m]_p := by rw [Exercise_7_4_5_Int]
-      _ = [1]_p * [m]_p := by rw [h10]
-      _ = [m]_p * [1]_p := by ring
-      _ = [m]_p := Theorem_7_3_6_7 _
-  show c ^ d ≡ m (MOD p) from (cc_eq_iff_congr _ _ _).ltr h11
-  done
-  done
+    show (p - 1) * s + 1 ≠ 0 from Nat.add_one_ne_zero _
+    done
+  have h10 : (0 : Int) ^ (e * d) = 0 := zero_pow h9
+  have h11 : [c ^ d]_p = [m]_p :=
+    calc [c ^ d]_p
+      _ = [c]_p ^ d := by rw [Exercise_7_4_5_Nat]
+      _ = ([m]_p ^ e) ^ d := by rw [h5]
+      _ = [m]_p ^ (e * d) := by ring
+      _ = [0]_p ^ (e * d) := by rw [h8]
+      _ = [0 ^ (e * d)]_p := Exercise_7_4_5_Int _ _ _
+      _ = [0]_p := by rw [h10]
+      _ = [m]_p := by rw [h8]
+  show c ^ d ≡ m (MOD p) from (cc_eq_iff_congr _ _ _).ltr h11
+  done
+· -- Case 2. h6 : ¬p ∣ m
+  have h7 : rel_prime m p := rel_prime_of_prime_not_dvd h1 h6
+  have h8 : rel_prime p m := rel_prime_symm h7
+  have h9 : NeZero p := prime_NeZero h1
+  have h10 : (1 : Int) ^ s = 1 := by ring
+  have h11 : [c ^ d]_p = [m]_p :=
+    calc [c ^ d]_p
+      _ = [c]_p ^ d := by rw [Exercise_7_4_5_Nat]
+      _ = ([m]_p ^ e) ^ d := by rw [h5]
+      _ = [m]_p ^ (e * d) := by ring
+      _ = [m]_p ^ ((p - 1) * s + 1) := by rw [h2]
+      _ = ([m]_p ^ (p - 1)) ^ s * [m]_p := by ring
+      _ = ([m]_p ^ (phi p)) ^ s * [m]_p := by rw [phi_prime h1]
+      _ = [1]_p ^ s * [m]_p := by rw [Theorem_7_4_2 h8]
+      _ = [1 ^ s]_p * [m]_p := by rw [Exercise_7_4_5_Int]
+      _ = [1]_p * [m]_p := by rw [h10]
+      _ = [m]_p * [1]_p := by ring
+      _ = [m]_p := Theorem_7_3_6_7 _
+  show c ^ d ≡ m (MOD p) from (cc_eq_iff_congr _ _ _).ltr h11
+  done
+  done

Here, finally, is the proof of Theorem_7_5_1:

theorem Theorem_7_5_1 (p q n e d k m c : Nat)
     (p_prime : prime p) (q_prime : prime q) (p_ne_q : p ≠ q)
diff --git a/docs/Chap8.html b/docs/Chap8.html
index 34aaa63..09a0301 100644
--- a/docs/Chap8.html
+++ b/docs/Chap8.html
@@ -237,7 +237,7 @@ 

In This Chapter

@@ -804,7 +804,7 @@

8.1. Equinumerous Sets

     show ∃ (n : Nat), R n x from fcnl_exists h3.right.right h4
     done
   done

-For the proof of 2 → 3, suppose we have A : Set U and S : Rel Nat U, and the statement fcnl_onto_from_nat S A is true. We need to come up with a relation R : Rel U Nat for which we can prove fcnl_one_one_to_nat R A. You might be tempted to try R = invRel S, but there is a problem with this choice: if x ∈ A, there might be multiple natural numbers n such that S n x holds, but we must make sure that there is only one n for which R x n holds. Our solution to this problem will be to define R x n to mean that n is the smallest natural number for which S n x holds. (The proof in HTPI uses a similar idea.) The well ordering principle guarantees that there always is such a smallest element.
+For the proof of 2 → 3, suppose we have A : Set U and S : Rel Nat U, and the statement fcnl_onto_from_nat S A is true. We need to come up with a relation R : Rel U Nat for which we can prove fcnl_one_one_to_nat R A. You might be tempted to try R = invRel S, but there is a problem with this choice: if x ∈ A, there might be multiple natural numbers n such that S n x holds, but we must make sure that there is only one n for which R x n holds. Our solution to this problem will be to define R x n to mean that n is the smallest natural number for which S n x holds. (The proof in HTPI uses a similar idea.) The well-ordering principle guarantees that there always is such a smallest element.

def least_rel_to {U : Type} (S : Rel Nat U) (x : U) (n : Nat) : Prop :=
   S n x ∧ ∀ (m : Nat), S m x → n ≤ m
 
@@ -965,7 +965,7 @@ 

8.1. Equinumerous Sets

   have h3 : fzn q1.num = fzn q2.num ∧ q1.den = q2.den :=
     Prod.mk.inj h2
   have h4 : q1.num = q2.num := fzn_one_one _ _ h3.left
-  show q1 = q2 from Rat.ext q1 q2 h4 h3.right
+  show q1 = q2 from Rat.ext h4 h3.right
   done

 lemma image_fqn_unbdd :
diff --git a/docs/How-To-Prove-It-With-Lean.pdf b/docs/How-To-Prove-It-With-Lean.pdf
index 39381a6..b36c755 100644
Binary files a/docs/How-To-Prove-It-With-Lean.pdf and b/docs/How-To-Prove-It-With-Lean.pdf differ
diff --git a/docs/IntroLean.html b/docs/IntroLean.html
index f9f323e..7ba756b 100644
--- a/docs/IntroLean.html
+++ b/docs/IntroLean.html
@@ -237,7 +237,7 @@

In This Chapter

diff --git a/docs/index.html b/docs/index.html
index 008822c..880daf6 100644
--- a/docs/index.html
+++ b/docs/index.html
@@ -179,7 +179,7 @@

In This Chapter

diff --git a/docs/search.json b/docs/search.json
index 4bd3f31..e43638d 100644
--- a/docs/search.json
+++ b/docs/search.json
@@ -4,7 +4,7 @@
     "href": "index.html",
     "title": "How To Prove It With Lean",
     "section": "",
-    "text": "$$\n\\newcommand{\\setmin}{\\mathbin{\\backslash}}\n\\newcommand{\\symmdiff}{\\bigtriangleup}\n$$"
+    "text": "$$\n\\newcommand{\\setmin}{\\mathbin{\\backslash}}\n\\newcommand{\\symmdiff}{\\mathbin{∆}}\n$$"
   },
   {
     "objectID": "index.html#about-this-book",
@@ -74,7 +74,7 @@
     "href": "IntroLean.html",
     "title": "Introduction to Lean",
     "section": "",
-    "text": "$$\n\\newcommand{\\setmin}{\\mathbin{\\backslash}}\n\\newcommand{\\symmdiff}{\\bigtriangleup}\n$$\nIf you are reading this book in conjunction with How To Prove It, you should complete Section 3.2 of HTPI before reading this chapter. Once you have reached that point in HTPI, you are ready to start learning about Lean. In this chapter we’ll explain the basics of writing proofs in Lean and getting feedback from Lean."
+    "text": "$$\n\\newcommand{\\setmin}{\\mathbin{\\backslash}}\n\\newcommand{\\symmdiff}{\\mathbin{∆}}\n$$\nIf you are reading this book in conjunction with How To Prove It, you should complete Section 3.2 of HTPI before reading this chapter. Once you have reached that point in HTPI, you are ready to start learning about Lean. In this chapter we’ll explain the basics of writing proofs in Lean and getting feedback from Lean."
   },
   {
     "objectID": "IntroLean.html#a-first-example",
@@ -109,7 +109,7 @@
     "href": "Chap3.html",
     "title": "3  Proofs",
     "section": "",
-    "text": "$$\n\\newcommand{\\setmin}{\\mathbin{\\backslash}}\n\\newcommand{\\symmdiff}{\\bigtriangleup}\n$$"
+    "text": "$$\n\\newcommand{\\setmin}{\\mathbin{\\backslash}}\n\\newcommand{\\symmdiff}{\\mathbin{∆}}\n$$"
   },
   {
     "objectID": "Chap3.html#proofs-involving-negations-and-conditionals",
@@ -123,7 +123,7 @@
     "href": "Chap3.html#proofs-involving-quantifiers",
     "title": "3  Proofs",
     "section": "3.3.
Proofs Involving Quantifiers", - "text": "3.3. Proofs Involving Quantifiers\nIn the notation used in HTPI, if \\(P(x)\\) is a statement about \\(x\\), then \\(\\forall x\\, P(x)\\) means “for all \\(x\\), \\(P(x)\\),” and \\(\\exists x\\, P(x)\\) means “there exists at least one \\(x\\) such that \\(P(x)\\).” The letter \\(P\\) here does not stand for a proposition; it is only when it is applied to some object \\(x\\) that we get a proposition. We will say that \\(P\\) is a predicate, and when we apply \\(P\\) to an object \\(x\\) we get the proposition \\(P(x)\\). You might want to think of the predicate \\(P\\) as representing some property that an object might have, and the proposition \\(P(x)\\) asserts that \\(x\\) has that property.\nTo use a predicate in Lean, you must tell Lean the type of objects to which it applies. If U is a type, then Pred U is the type of predicates that apply to objects of type U. If P has type Pred U (that is, P is a predicate applying to objects of type U) and x has type U, then to apply P to x we just write P x (with a space but no parentheses). Thus, if we have P : Pred U and x : U, then P x is an expression of type Prop. That is, P x is a proposition, and its meaning is that x has the property represented by the predicate P.\nThere are a few differences between the way quantified statements are written in HTPI and the way they are written in Lean. First of all, when we apply a quantifier to a variable in Lean we will specify the type of the variable explicitly. Also, Lean requires that after specifying the variable and its type, you must put a comma before the proposition to which the quantifier is applied. Thus, if P has type Pred U, then to say that P holds for all objects of type U we would write ∀ (x : U), P x. 
Similarly, ∃ (x : U), P x is the proposition asserting that there exists at least one x of type U such that P x.\nAnd there is one more important difference between the way quantified statements are written in HTPI and Lean. In HTPI, a quantifier is interpreted as applying to as little as possible. Thus, \\(\\forall x\\, P(x) \\wedge Q(x)\\) is interpreted as \\((\\forall x\\, P(x)) \\wedge Q(x)\\); if you want the quantifier \\(\\forall x\\) to apply to the entire statement \\(P(x) \\wedge Q(x)\\) you must use parentheses and write \\(\\forall x(P(x) \\wedge Q(x))\\). The convention in Lean is exactly the opposite: a quantifier applies to as much as possible. Thus, Lean will interpret ∀ (x : U), P x ∧ Q x as meaning ∀ (x : U), (P x ∧ Q x). If you want the quantifier to apply to only P x, then you must use parentheses and write (∀ (x : U), P x) ∧ Q x.\nWith this preparation, we are ready to consider how to write proofs involving quantifiers in Lean. The most common way to prove a goal of the form ∀ (x : U), P x is to use the following strategy (HTPI p. 114):\n\nTo prove a goal of the form ∀ (x : U), P x:\n\nLet x stand for an arbitrary object of type U and prove P x. If the letter x is already being used in the proof to stand for something, then you must choose an unused variable, say y, to stand for the arbitrary object, and prove P y.\n\nTo do this in Lean, you should use the tactic fix x : U, which tells Lean to treat x as standing for some fixed but arbitrary object of type U. This has the following effect on the tactic state:\n\n\n>> ⋮\n⊢ ∀ (x : U), P x\n\n\n>> ⋮\nx : U\n⊢ P x\n\n\nTo use a given of the form ∀ (x : U), P x, we usually apply a rule of inference called universal instantiation, which is described by the following proof strategy (HTPI p. 
121):\n\n\nTo use a given of the form ∀ (x : U), P x:\n\nYou may plug in any value of type U, say a, for x and use this given to conclude that P a is true.\n\nThis strategy says that if you have h : ∀ (x : U), P x and a : U, then you can infer P a. Indeed, in this situation Lean will recognize h a as a proof of P a. For example, you can write have h' : P a := h a in a Lean tactic-mode proof, and Lean will add h' : P a to the tactic state. Note that a here need not be simply a variable; it can be any expression denoting an object of type U.\nLet’s try these strategies out in a Lean proof. In Lean, if you don’t want to give a theorem a name, you can simply call it an example rather than a theorem, and then there is no need to give it a name. In the following theorem, you can enter the symbol ∀ by typing \\forall or \\all, and you can enter ∃ by typing \\exists or \\ex.\n\n\nexample (U : Type) (P Q : Pred U)\n (h1 : ∀ (x : U), P x → ¬Q x)\n (h2 : ∀ (x : U), Q x) :\n ¬∃ (x : U), P x := by\n \n **done::\n\n\nU : Type\nP Q : Pred U\nh1 : ∀ (x : U), P x → ¬Q x\nh2 : ∀ (x : U), Q x\n⊢ ¬∃ (x : U), P x\n\n\nTo use the givens h1 and h2, we will probably want to use universal instantiation. But to do that we would need an object of type U to plug in for x in h1 and h2, and there is no object of type U in the tactic state. So at this point, we can’t apply universal instantiation to h1 and h2. We should watch for an object of type U to come up in the course of the proof, and consider applying universal instantiation if one does. Until then, we turn our attention to the goal.\nThe goal is a negative statement, so we begin by reexpressing it as an equivalent positive statement, using a quantifier negation law. The tactic quant_neg applies a quantifier negation law to rewrite the goal. As with the other tactics for applying logical equivalences, you can write quant_neg at h if you want to apply a quantifier negation law to a given h. 
The effect of the tactic can be summarized as follows:\n\n\n\n\n\nquant_neg Tactic\n\n\n\n\n\n¬∀ (x : U), P x\nis changed to\n∃ (x : U), ¬P x\n\n\n¬∃ (x : U), P x\nis changed to\n∀ (x : U), ¬P x\n\n\n∀ (x : U), P x\nis changed to\n¬∃ (x : U), ¬P x\n\n\n∃ (x : U), P x\nis changed to\n¬∀ (x : U), ¬P x\n\n\n\n\nUsing the quant_neg tactic leads to the following result.\n\n\nexample (U : Type) (P Q : Pred U)\n (h1 : ∀ (x : U), P x → ¬Q x)\n (h2 : ∀ (x : U), Q x) :\n ¬∃ (x : U), P x := by\n quant_neg --Goal is now ∀ (x : U), ¬P x\n **done::\n\n\nU : Type\nP Q : Pred U\nh1 : ∀ (x : U), P x → ¬Q x\nh2 : ∀ (x : U), Q x\n⊢ ∀ (x : U), ¬P x\n\n\nNow the goal starts with ∀, so we use the strategy above and introduce an arbitrary object of type U. Since the variable x occurs as a bound variable in several statements in this theorem, it might be best to use a different letter for the arbitrary object; this isn’t absolutely necessary, but it may help to avoid confusion. So our next tactic is fix y : U.\n\n\nexample (U : Type) (P Q : Pred U)\n (h1 : ∀ (x : U), P x → ¬Q x)\n (h2 : ∀ (x : U), Q x) :\n ¬∃ (x : U), P x := by\n quant_neg --Goal is now ∀ (x : U), ¬P x\n fix y : U\n **done::\n\n\nU : Type\nP Q : Pred U\nh1 : ∀ (x : U), P x → ¬Q x\nh2 : ∀ (x : U), Q x\ny : U\n⊢ ¬P y\n\n\nNow we have an object of type U in the tactic state, namely, y. So let’s try applying universal instantiation to h1 and h2 and see if it helps.\n\n\nexample (U : Type) (P Q : Pred U)\n (h1 : ∀ (x : U), P x → ¬Q x)\n (h2 : ∀ (x : U), Q x) :\n ¬∃ (x : U), P x := by\n quant_neg --Goal is now ∀ (x : U), ¬P x\n fix y : U\n have h3 : P y → ¬Q y := h1 y\n have h4 : Q y := h2 y\n **done::\n\n\nU : Type\nP Q : Pred U\nh1 : ∀ (x : U), P x → ¬Q x\nh2 : ∀ (x : U), Q x\ny : U\nh3 : P y → ¬Q y\nh4 : Q y\n⊢ ¬P y\n\n\nWe’re almost done, because the goal now follows easily from h3 and h4. 
If we use the contrapositive law to rewrite h3 as Q y → ¬P y, then we can apply modus ponens to the rewritten h3 and h4 to reach the goal:\n\n\nexample (U : Type) (P Q : Pred U)\n (h1 : ∀ (x : U), P x → ¬Q x)\n (h2 : ∀ (x : U), Q x) :\n ¬∃ (x : U), P x := by\n quant_neg --Goal is now ∀ (x : U), ¬P x\n fix y : U\n have h3 : P y → ¬Q y := h1 y\n have h4 : Q y := h2 y\n contrapos at h3 --Now h3 : Q y → ¬P y\n show ¬P y from h3 h4\n done\n\n\nNo goals\n\n\nOur next example is a theorem of set theory. You already know how to type a few set theory symbols in Lean, but you’ll need a few more for our next example. Here’s a summary of the most important set theory symbols and how to type them in Lean.\n\n\n\n\nSymbol\nHow To Type It\n\n\n\n\n∈\n\\in\n\n\n∉\n\\notin or \\inn\n\n\n⊆\n\\sub\n\n\n⊈\n\\subn\n\n\n=\n=\n\n\n≠\n\\ne\n\n\n∪\n\\union or \\cup\n\n\n∩\n\\inter or \\cap\n\n\n\\\n\\\\\n\n\n△\n\\bigtriangleup\n\n\n∅\n\\emptyset\n\n\n𝒫\n\\powerset\n\n\n\n\nWith this preparation, we can turn to our next example.\n\n\nexample (U : Type) (A B C : Set U) (h1 : A ⊆ B ∪ C)\n (h2 : ∀ (x : U), x ∈ A → x ∉ B) : A ⊆ C := by\n \n **done::\n\n\nU : Type\nA B C : Set U\nh1 : A ⊆ B ∪ C\nh2 : ∀ x ∈ A, x ∉ B\n⊢ A ⊆ C\n\n\nNotice that in the Infoview, Lean has written h2 as ∀ x ∈ A, x ∉ B, using a bounded quantifier. As explained in Section 2.2 of HTPI (see p. 72), this is a shorter way of writing the statement ∀ (x : U), x ∈ A → x ∉ B. We begin by using the define tactic to write out the definition of the goal.\n\n\nexample (U : Type) (A B C : Set U) (h1 : A ⊆ B ∪ C)\n (h2 : ∀ (x : U), x ∈ A → x ∉ B) : A ⊆ C := by\n define --Goal : ∀ ⦃a : U⦄, a ∈ A → a ∈ C\n **done::\n\n\nU : Type\nA B C: Set U\nh1 : A ⊆ B ∪ C\nh2 : ∀ x ∈ A, x ∉ B\n⊢ ∀ ⦃a : U⦄,\n>> a ∈ A → a ∈ C\n\n\nNotice that Lean’s definition of the goal starts with ∀ ⦃a : U⦄, not ∀ (a : U). Why did Lean use those funny double braces rather than parentheses? We’ll return to that question shortly. 
Try changing h1 a in the last step to h1 _, and you’ll see that Lean will be able to figure out how to fill in the blank.\nOur new given h4 is another existential statement, so again we use it right away to introduce another object of type U. Since this object might not be the same as a, we must give it a different name. (Indeed, if you try to use the name a again, Lean will give you an error message.)\n\n\nexample (U : Type) (P Q : Pred U)\n (h1 : ∀ (x : U), ∃ (y : U), P x → ¬ Q y)\n (h2 : ∃ (x : U), ∀ (y : U), P x → Q y) :\n ∃ (x : U), ¬P x := by\n obtain (a : U)\n (h3 : ∀ (y : U), P a → Q y) from h2\n have h4 : ∃ (y : U), P a → ¬ Q y := h1 a\n obtain (b : U) (h5 : P a → ¬ Q b) from h4\n **done::\n\n\nU : Type\nP Q : Pred U\nh1 : ∀ (x : U), ∃ (y : U),\n>> P x → ¬Q y\nh2 : ∃ (x : U), ∀ (y : U),\n>> P x → Q y\na : U\nh3 : ∀ (y : U), P a → Q y\nh4 : ∃ (y : U), P a → ¬Q y\nb : U\nh5 : P a → ¬Q b\n⊢ ∃ (x : U), ¬P x\n\n\nWe have not yet used h3. We could plug in either a or b for y in h3, but a little thought should show you that plugging in b is more useful.\n\n\nexample (U : Type) (P Q : Pred U)\n (h1 : ∀ (x : U), ∃ (y : U), P x → ¬ Q y)\n (h2 : ∃ (x : U), ∀ (y : U), P x → Q y) :\n ∃ (x : U), ¬P x := by\n obtain (a : U)\n (h3 : ∀ (y : U), P a → Q y) from h2\n have h4 : ∃ (y : U), P a → ¬ Q y := h1 a\n obtain (b : U) (h5 : P a → ¬ Q b) from h4\n have h6 : P a → Q b := h3 b\n **done::\n\n\nU : Type\nP Q : Pred U\nh1 : ∀ (x : U), ∃ (y : U),\n>> P x → ¬Q y\nh2 : ∃ (x : U), ∀ (y : U),\n>> P x → Q y\na : U\nh3 : ∀ (y : U), P a → Q y\nh4 : ∃ (y : U), P a → ¬Q y\nb : U\nh5 : P a → ¬Q b\nh6 : P a → Q b\n⊢ ∃ (x : U), ¬P x\n\n\nNow look at h5 and h6. They show that P a leads to contradictory conclusions, ¬Q b and Q b. This means that P a must be false. 
We finally know what value of x to use to prove the goal.\n\n\nexample (U : Type) (P Q : Pred U)\n (h1 : ∀ (x : U), ∃ (y : U), P x → ¬ Q y)\n (h2 : ∃ (x : U), ∀ (y : U), P x → Q y) :\n ∃ (x : U), ¬P x := by\n obtain (a : U)\n (h3 : ∀ (y : U), P a → Q y) from h2\n have h4 : ∃ (y : U), P a → ¬ Q y := h1 a\n obtain (b : U) (h5 : P a → ¬ Q b) from h4\n have h6 : P a → Q b := h3 b\n apply Exists.intro a _\n **done::\n\n\nU : Type\nP Q : Pred U\nh1 : ∀ (x : U), ∃ (y : U),\n>> P x → ¬Q y\nh2 : ∃ (x : U), ∀ (y : U),\n>> P x → Q y\na : U\nh3 : ∀ (y : U), P a → Q y\nh4 : ∃ (y : U), P a → ¬Q y\nb : U\nh5 : P a → ¬Q b\nh6 : P a → Q b\n⊢ ¬P a\n\n\nSince the goal is now a negative statement that cannot be reexpressed as a positive statement, we use proof by contradiction.\n\n\nexample (U : Type) (P Q : Pred U)\n (h1 : ∀ (x : U), ∃ (y : U), P x → ¬ Q y)\n (h2 : ∃ (x : U), ∀ (y : U), P x → Q y) :\n ∃ (x : U), ¬P x := by\n obtain (a : U)\n (h3 : ∀ (y : U), P a → Q y) from h2\n have h4 : ∃ (y : U), P a → ¬ Q y := h1 a\n obtain (b : U) (h5 : P a → ¬ Q b) from h4\n have h6 : P a → Q b := h3 b\n apply Exists.intro a _\n by_contra h7\n **done::\n\n\nU : Type\nP Q : Pred U\nh1 : ∀ (x : U), ∃ (y : U),\n>> P x → ¬Q y\nh2 : ∃ (x : U), ∀ (y : U),\n>> P x → Q y\na : U\nh3 : ∀ (y : U), P a → Q y\nh4 : ∃ (y : U), P a → ¬Q y\nb : U\nh5 : P a → ¬Q b\nh6 : P a → Q b\nh7 : P a\n⊢ False\n\n\nNow h5 h7 is a proof of ¬Q b and h6 h7 is a proof of Q b, so h5 h7 (h6 h7) is a proof of False.\n\n\nexample (U : Type) (P Q : Pred U)\n (h1 : ∀ (x : U), ∃ (y : U), P x → ¬ Q y)\n (h2 : ∃ (x : U), ∀ (y : U), P x → Q y) :\n ∃ (x : U), ¬P x := by\n obtain (a : U)\n (h3 : ∀ (y : U), P a → Q y) from h2\n have h4 : ∃ (y : U), P a → ¬ Q y := h1 a\n obtain (b : U) (h5 : P a → ¬ Q b) from h4\n have h6 : P a → Q b := h3 b\n apply Exists.intro a _\n by_contra h7\n show False from h5 h7 (h6 h7)\n done\n\n\nNo goals\n\n\nWe conclude this section with the theorem from Example 3.3.5 in HTPI. 
That theorem concerns a union of a family of sets. In HTPI, such a union is written using a large union symbol, \\(\\bigcup\\). Lean uses the symbol ⋃₀, which is entered by typing \\U0 (that is, backslash–capital U–zero). For an intersection of a family of sets, Lean uses ⋂₀, typed as \\I0.\n\n\ntheorem Example_3_3_5 (U : Type) (B : Set U)\n (F : Set (Set U)) : ⋃₀ F ⊆ B → F ⊆ 𝒫 B := by\n \n **done::\n\n\nU : Type\nB : Set U\nF : Set (Set U)\n⊢ ⋃₀ F ⊆ B → F ⊆ 𝒫 B\n\n\nNote that F has type Set (Set U), which means that it is a set whose elements are sets of objects of type U. Since the goal is a conditional statement, we assume the antecedent and set the consequent as our goal. We’ll also write out the definition of the new goal.\n\n\ntheorem Example_3_3_5 (U : Type) (B : Set U)\n (F : Set (Set U)) : ⋃₀ F ⊆ B → F ⊆ 𝒫 B := by\n assume h1 : ⋃₀ F ⊆ B\n define\n **done::\n\n\nU : Type\nB : Set U\nF : Set (Set U)\nh1 : ⋃₀ F ⊆ B\n⊢ ∀ ⦃a : Set U⦄,\n>> a ∈ F → a ∈ 𝒫 B\n\n\nBased on the form of the goal, we introduce an arbitrary object x of type Set U and assume x ∈ F. The new goal will be x ∈ 𝒫 B. 
The define tactic works out that this means x ⊆ B, which can be further expanded to ∀ ⦃a : U⦄, a ∈ x → a ∈ B.\n\n\ntheorem Example_3_3_5 (U : Type) (B : Set U)\n (F : Set (Set U)) : ⋃₀ F ⊆ B → F ⊆ 𝒫 B := by\n assume h1 : ⋃₀ F ⊆ B\n define\n fix x : Set U\n assume h2 : x ∈ F\n define\n **done::\n\n\nU : Type\nB : Set U\nF : Set (Set U)\nh1 : ⋃₀ F ⊆ B\nx : Set U\nh2 : x ∈ F\n⊢ ∀ ⦃a : U⦄,\n>> a ∈ x → a ∈ B\n\n\nOnce again the form of the goal dictates our next steps: introduce an arbitrary y of type U and assume y ∈ x.\n\n\ntheorem Example_3_3_5 (U : Type) (B : Set U)\n (F : Set (Set U)) : ⋃₀ F ⊆ B → F ⊆ 𝒫 B := by\n assume h1 : ⋃₀ F ⊆ B\n define\n fix x : Set U\n assume h2 : x ∈ F\n define\n fix y : U\n assume h3 : y ∈ x\n **done::\n\n\nU : Type\nB : Set U\nF : Set (Set U)\nh1 : ⋃₀ F ⊆ B\nx : Set U\nh2 : x ∈ F\ny : U\nh3 : y ∈ x\n⊢ y ∈ B\n\n\nThe goal can be analyzed no further, so we turn to the givens. We haven’t used h1 yet. To see how to use it, we write out its definition.\n\n\ntheorem Example_3_3_5 (U : Type) (B : Set U)\n (F : Set (Set U)) : ⋃₀ F ⊆ B → F ⊆ 𝒫 B := by\n assume h1 : ⋃₀ F ⊆ B\n define\n fix x : Set U\n assume h2 : x ∈ F\n define\n fix y : U\n assume h3 : y ∈ x\n define at h1\n **done::\n\n\nU : Type\nB : Set U\nF : Set (Set U)\nh1 : ∀ ⦃a : U⦄,\n>> a ∈ ⋃₀ F → a ∈ B\nx : Set U\nh2 : x ∈ F\ny : U\nh3 : y ∈ x\n⊢ y ∈ B\n\n\nNow we see that we can try to use h1 to reach our goal. Indeed, h1 _ would be a proof of the goal if we could fill in the blank with a proof of y ∈ ⋃₀ F. 
So we use the apply h1 _ tactic.\n\n\ntheorem Example_3_3_5 (U : Type) (B : Set U)\n (F : Set (Set U)) : ⋃₀ F ⊆ B → F ⊆ 𝒫 B := by\n assume h1 : ⋃₀ F ⊆ B\n define\n fix x : Set U\n assume h2 : x ∈ F\n define\n fix y : U\n assume h3 : y ∈ x\n define at h1\n apply h1 _\n **done::\n\n\nU : Type\nB : Set U\nF : Set (Set U)\nh1 : ∀ ⦃a : U⦄,\n>> a ∈ ⋃₀ F → a ∈ B\nx : Set U\nh2 : x ∈ F\ny : U\nh3 : y ∈ x\n⊢ y ∈ ⋃₀ F\n\n\nOnce again we have a goal that can be analyzed by using the define tactic.\n\n\ntheorem Example_3_3_5 (U : Type) (B : Set U)\n (F : Set (Set U)) : ⋃₀ F ⊆ B → F ⊆ 𝒫 B := by\n assume h1 : ⋃₀ F ⊆ B\n define\n fix x : Set U\n assume h2 : x ∈ F\n define\n fix y : U\n assume h3 : y ∈ x\n define at h1\n apply h1 _\n define\n **done::\n\n\nU : Type\nB : Set U\nF : Set (Set U)\nh1 : ∀ ⦃a : U⦄,\n>> a ∈ ⋃₀ F → a ∈ B\nx : Set U\nh2 : x ∈ F\ny : U\nh3 : y ∈ x\n⊢ ∃ t ∈ F, y ∈ t\n\n\nOur goal now is ∃ (t : Set U), t ∈ F ∧ y ∈ t, although once again Lean has used a bounded quantifier to write this in a shorter form. So we look for a value of t that will make the statement t ∈ F ∧ y ∈ t true. The givens h2 and h3 tell us that x is such a value, so as described earlier our next tactic should be apply Exists.intro x _.\n\n\ntheorem Example_3_3_5 (U : Type) (B : Set U)\n (F : Set (Set U)) : ⋃₀ F ⊆ B → F ⊆ 𝒫 B := by\n assume h1 : ⋃₀ F ⊆ B\n define\n fix x : Set U\n assume h2 : x ∈ F\n define\n fix y : U\n assume h3 : y ∈ x\n define at h1\n apply h1 _\n define\n apply Exists.intro x _\n **done::\n\n\nU : Type\nB : Set U\nF : Set (Set U)\nh1 : ∀ ⦃a : U⦄,\n>> a ∈ ⋃₀ F → a ∈ B\nx : Set U\nh2 : x ∈ F\ny : U\nh3 : y ∈ x\n⊢ x ∈ F ∧ y ∈ x\n\n\nClearly the goal now follows from h2 and h3, but how do we write the proof in Lean? Since we need to introduce the “and” symbol ∧, you shouldn’t be surprised to learn that the rule we need is called And.intro. 
Proof strategies for statements involving “and” will be the subject of the next section.\n\n\ntheorem Example_3_3_5 (U : Type) (B : Set U)\n (F : Set (Set U)) : ⋃₀ F ⊆ B → F ⊆ 𝒫 B := by\n assume h1 : ⋃₀ F ⊆ B\n define\n fix x : Set U\n assume h2 : x ∈ F\n define\n fix y : U\n assume h3 : y ∈ x\n define at h1\n apply h1 _\n define\n apply Exists.intro x _\n show x ∈ F ∧ y ∈ x from And.intro h2 h3\n done\n\n\nNo goals\n\n\nYou might want to compare the Lean proof above to the way the proof was written in HTPI. Here are the theorem and proof from HTPI (HTPI p. 125):\n\nSuppose \\(B\\) is a set and \\(\\mathcal{F}\\) is a family of sets. If \\(\\bigcup\\mathcal{F} \\subseteq B\\) then \\(\\mathcal{F} \\subseteq \\mathscr{P}(B)\\).\n\n\nProof. Suppose \\(\\bigcup \\mathcal{F} \\subseteq B\\). Let \\(x\\) be an arbitrary element of \\(\\mathcal{F}\\). Let \\(y\\) be an arbitrary element of \\(x\\). Since \\(y \\in x\\) and \\(x \\in \\mathcal{F}\\), by the definition of \\(\\bigcup \\mathcal{F}\\), \\(y \\in \\bigcup \\mathcal{F}\\). But then since \\(\\bigcup \\mathcal{F} \\subseteq B\\), \\(y \\in B\\). Since \\(y\\) was an arbitrary element of \\(x\\), we can conclude that \\(x \\subseteq B\\), so \\(x \\in \\mathscr{P}(B)\\). But \\(x\\) was an arbitrary element of \\(\\mathcal{F}\\), so this shows that \\(\\mathcal{F} \\subseteq \\mathscr{P}(B)\\), as required.  
□\n\n\n\nExercises\n\ntheorem Exercise_3_3_1\n (U : Type) (P Q : Pred U) (h1 : ∃ (x : U), P x → Q x) :\n (∀ (x : U), P x) → ∃ (x : U), Q x := by\n \n **done::\n\n\ntheorem Exercise_3_3_8 (U : Type) (F : Set (Set U)) (A : Set U)\n (h1 : A ∈ F) : A ⊆ ⋃₀ F := by\n \n **done::\n\n\ntheorem Exercise_3_3_9 (U : Type) (F : Set (Set U)) (A : Set U)\n (h1 : A ∈ F) : ⋂₀ F ⊆ A := by\n \n **done::\n\n\ntheorem Exercise_3_3_10 (U : Type) (B : Set U) (F : Set (Set U))\n (h1 : ∀ (A : Set U), A ∈ F → B ⊆ A) : B ⊆ ⋂₀ F := by\n \n **done::\n\n\ntheorem Exercise_3_3_13 (U : Type)\n (F G : Set (Set U)) : F ⊆ G → ⋂₀ G ⊆ ⋂₀ F := by\n \n **done::\n\n\n3.3. Proofs Involving Quantifiers\nIn the notation used in HTPI, if \\(P(x)\\) is a statement about \\(x\\), then \\(\\forall x\\, P(x)\\) means “for all \\(x\\), \\(P(x)\\),” and \\(\\exists x\\, P(x)\\) means “there exists at least one \\(x\\) such that \\(P(x)\\).” The letter \\(P\\) here does not stand for a proposition; it is only when it is applied to some object \\(x\\) that we get a proposition. We will say that \\(P\\) is a predicate, and when we apply \\(P\\) to an object \\(x\\) we get the proposition \\(P(x)\\). You might want to think of the predicate \\(P\\) as representing some property that an object might have, and the proposition \\(P(x)\\) asserts that \\(x\\) has that property.\nTo use a predicate in Lean, you must tell Lean the type of objects to which it applies. If U is a type, then Pred U is the type of predicates that apply to objects of type U. If P has type Pred U (that is, P is a predicate applying to objects of type U) and x has type U, then to apply P to x we just write P x (with a space but no parentheses). Thus, if we have P : Pred U and x : U, then P x is an expression of type Prop. 
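You can confirm this with Lean's `#check` command. The following is a quick sketch, assuming the `HTPILib` library used throughout this book (which defines `Pred`) is imported:

```lean
variable (U : Type) (P : Pred U) (x : U)

#check P      --P : Pred U
#check P x    --P x : Prop
```

As expected, applying the predicate `P` to the object `x` produces something of type `Prop`, that is, a proposition.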
That is, P x is a proposition, and its meaning is that x has the property represented by the predicate P.\nThere are a few differences between the way quantified statements are written in HTPI and the way they are written in Lean. First of all, when we apply a quantifier to a variable in Lean we will specify the type of the variable explicitly. Also, Lean requires that after specifying the variable and its type, you must put a comma before the proposition to which the quantifier is applied. Thus, if P has type Pred U, then to say that P holds for all objects of type U we would write ∀ (x : U), P x. Similarly, ∃ (x : U), P x is the proposition asserting that there exists at least one x of type U such that P x.\nAnd there is one more important difference between the way quantified statements are written in HTPI and Lean. In HTPI, a quantifier is interpreted as applying to as little as possible. Thus, \\(\\forall x\\, P(x) \\wedge Q(x)\\) is interpreted as \\((\\forall x\\, P(x)) \\wedge Q(x)\\); if you want the quantifier \\(\\forall x\\) to apply to the entire statement \\(P(x) \\wedge Q(x)\\) you must use parentheses and write \\(\\forall x(P(x) \\wedge Q(x))\\). The convention in Lean is exactly the opposite: a quantifier applies to as much as possible. Thus, Lean will interpret ∀ (x : U), P x ∧ Q x as meaning ∀ (x : U), (P x ∧ Q x). If you want the quantifier to apply to only P x, then you must use parentheses and write (∀ (x : U), P x) ∧ Q x.\nWith this preparation, we are ready to consider how to write proofs involving quantifiers in Lean. The most common way to prove a goal of the form ∀ (x : U), P x is to use the following strategy (HTPI p. 114):\n\nTo prove a goal of the form ∀ (x : U), P x:\n\nLet x stand for an arbitrary object of type U and prove P x. 
If the letter x is already being used in the proof to stand for something, then you must choose an unused variable, say y, to stand for the arbitrary object, and prove P y.\n\nTo do this in Lean, you should use the tactic fix x : U, which tells Lean to treat x as standing for some fixed but arbitrary object of type U. This has the following effect on the tactic state:\n\n\n>> ⋮\n⊢ ∀ (x : U), P x\n\n\n>> ⋮\nx : U\n⊢ P x\n\n\nTo use a given of the form ∀ (x : U), P x, we usually apply a rule of inference called universal instantiation, which is described by the following proof strategy (HTPI p. 121):\n\n\nTo use a given of the form ∀ (x : U), P x:\n\nYou may plug in any value of type U, say a, for x and use this given to conclude that P a is true.\n\nThis strategy says that if you have h : ∀ (x : U), P x and a : U, then you can infer P a. Indeed, in this situation Lean will recognize h a as a proof of P a. For example, you can write have h' : P a := h a in a Lean tactic-mode proof, and Lean will add h' : P a to the tactic state. Note that a here need not be simply a variable; it can be any expression denoting an object of type U.\nLet’s try these strategies out in a Lean proof. In Lean, if you don’t want to give a theorem a name, you can simply call it an example rather than a theorem, and then there is no need to give it a name. In the following example, you can enter the symbol ∀ by typing \\forall or \\all, and you can enter ∃ by typing \\exists or \\ex.\n\n\nexample (U : Type) (P Q : Pred U)\n (h1 : ∀ (x : U), P x → ¬Q x)\n (h2 : ∀ (x : U), Q x) :\n ¬∃ (x : U), P x := by\n \n **done::\n\n\nU : Type\nP Q : Pred U\nh1 : ∀ (x : U), P x → ¬Q x\nh2 : ∀ (x : U), Q x\n⊢ ¬∃ (x : U), P x\n\n\nTo use the givens h1 and h2, we will probably want to use universal instantiation. But to do that we would need an object of type U to plug in for x in h1 and h2, and there is no object of type U in the tactic state. 
So at this point, we can’t apply universal instantiation to h1 and h2. We should watch for an object of type U to come up in the course of the proof, and consider applying universal instantiation if one does. Until then, we turn our attention to the goal.\nThe goal is a negative statement, so we begin by reexpressing it as an equivalent positive statement, using a quantifier negation law. The tactic quant_neg applies a quantifier negation law to rewrite the goal. As with the other tactics for applying logical equivalences, you can write quant_neg at h if you want to apply a quantifier negation law to a given h. The effect of the tactic can be summarized as follows:\n\n\n\n\n\nquant_neg Tactic\n\n\n\n\n\n¬∀ (x : U), P x\nis changed to\n∃ (x : U), ¬P x\n\n\n¬∃ (x : U), P x\nis changed to\n∀ (x : U), ¬P x\n\n\n∀ (x : U), P x\nis changed to\n¬∃ (x : U), ¬P x\n\n\n∃ (x : U), P x\nis changed to\n¬∀ (x : U), ¬P x\n\n\n\n\nUsing the quant_neg tactic leads to the following result.\n\n\nexample (U : Type) (P Q : Pred U)\n (h1 : ∀ (x : U), P x → ¬Q x)\n (h2 : ∀ (x : U), Q x) :\n ¬∃ (x : U), P x := by\n quant_neg --Goal is now ∀ (x : U), ¬P x\n **done::\n\n\nU : Type\nP Q : Pred U\nh1 : ∀ (x : U), P x → ¬Q x\nh2 : ∀ (x : U), Q x\n⊢ ∀ (x : U), ¬P x\n\n\nNow the goal starts with ∀, so we use the strategy above and introduce an arbitrary object of type U. Since the variable x occurs as a bound variable in several statements in this theorem, it might be best to use a different letter for the arbitrary object; this isn’t absolutely necessary, but it may help to avoid confusion. So our next tactic is fix y : U.\n\n\nexample (U : Type) (P Q : Pred U)\n (h1 : ∀ (x : U), P x → ¬Q x)\n (h2 : ∀ (x : U), Q x) :\n ¬∃ (x : U), P x := by\n quant_neg --Goal is now ∀ (x : U), ¬P x\n fix y : U\n **done::\n\n\nU : Type\nP Q : Pred U\nh1 : ∀ (x : U), P x → ¬Q x\nh2 : ∀ (x : U), Q x\ny : U\n⊢ ¬P y\n\n\nNow we have an object of type U in the tactic state, namely, y. 
So let’s try applying universal instantiation to h1 and h2 and see if it helps.\n\n\nexample (U : Type) (P Q : Pred U)\n (h1 : ∀ (x : U), P x → ¬Q x)\n (h2 : ∀ (x : U), Q x) :\n ¬∃ (x : U), P x := by\n quant_neg --Goal is now ∀ (x : U), ¬P x\n fix y : U\n have h3 : P y → ¬Q y := h1 y\n have h4 : Q y := h2 y\n **done::\n\n\nU : Type\nP Q : Pred U\nh1 : ∀ (x : U), P x → ¬Q x\nh2 : ∀ (x : U), Q x\ny : U\nh3 : P y → ¬Q y\nh4 : Q y\n⊢ ¬P y\n\n\nWe’re almost done, because the goal now follows easily from h3 and h4. If we use the contrapositive law to rewrite h3 as Q y → ¬P y, then we can apply modus ponens to the rewritten h3 and h4 to reach the goal:\n\n\nexample (U : Type) (P Q : Pred U)\n (h1 : ∀ (x : U), P x → ¬Q x)\n (h2 : ∀ (x : U), Q x) :\n ¬∃ (x : U), P x := by\n quant_neg --Goal is now ∀ (x : U), ¬P x\n fix y : U\n have h3 : P y → ¬Q y := h1 y\n have h4 : Q y := h2 y\n contrapos at h3 --Now h3 : Q y → ¬P y\n show ¬P y from h3 h4\n done\n\n\nNo goals\n\n\nOur next example is a theorem of set theory. You already know how to type a few set theory symbols in Lean, but you’ll need a few more for our next example. Here’s a summary of the most important set theory symbols and how to type them in Lean.\n\n\n\n\nSymbol\nHow To Type It\n\n\n\n\n∈\n\\in\n\n\n∉\n\\notin or \\inn\n\n\n⊆\n\\sub\n\n\n⊈\n\\subn\n\n\n=\n=\n\n\n≠\n\\ne\n\n\n∪\n\\union or \\cup\n\n\n∩\n\\inter or \\cap\n\n\n\\\n\\\\\n\n\n∆\n\\symmdiff\n\n\n∅\n\\emptyset\n\n\n𝒫\n\\powerset\n\n\n\n\nWith this preparation, we can turn to our next example.\n\n\nexample (U : Type) (A B C : Set U) (h1 : A ⊆ B ∪ C)\n (h2 : ∀ (x : U), x ∈ A → x ∉ B) : A ⊆ C := by\n \n **done::\n\n\nU : Type\nA B C : Set U\nh1 : A ⊆ B ∪ C\nh2 : ∀ x ∈ A, x ∉ B\n⊢ A ⊆ C\n\n\nNotice that in the Infoview, Lean has written h2 as ∀ x ∈ A, x ∉ B, using a bounded quantifier. As explained in Section 2.2 of HTPI (see p. 72), this is a shorter way of writing the statement ∀ (x : U), x ∈ A → x ∉ B. 
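The bounded quantifier is mere notation: `∀ x ∈ A, x ∉ B` unfolds to `∀ (x : U), x ∈ A → x ∉ B`, so the two statements are definitionally equal. As a quick sketch, Lean will accept `Iff.rfl` as a proof that they say the same thing:

```lean
example (U : Type) (A B : Set U) :
    (∀ x ∈ A, x ∉ B) ↔ (∀ (x : U), x ∈ A → x ∉ B) := Iff.rfl
```

This is why h2 can be used exactly as if it had been displayed in the unabbreviated form.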
We begin by using the define tactic to write out the definition of the goal.\n\n\nexample (U : Type) (A B C : Set U) (h1 : A ⊆ B ∪ C)\n (h2 : ∀ (x : U), x ∈ A → x ∉ B) : A ⊆ C := by\n define --Goal : ∀ ⦃a : U⦄, a ∈ A → a ∈ C\n **done::\n\n\nU : Type\nA B C: Set U\nh1 : A ⊆ B ∪ C\nh2 : ∀ x ∈ A, x ∉ B\n⊢ ∀ ⦃a : U⦄,\n>> a ∈ A → a ∈ C\n\n\nNotice that Lean’s definition of the goal starts with ∀ ⦃a : U⦄, not ∀ (a : U). Why did Lean use those funny double braces rather than parentheses? We’ll return to that question shortly. The difference doesn’t affect our next steps, which are to introduce an arbitrary object y of type U and assume y ∈ A.\n\n\nexample (U : Type) (A B C : Set U) (h1 : A ⊆ B ∪ C)\n (h2 : ∀ (x : U), x ∈ A → x ∉ B) : A ⊆ C := by\n define --Goal : ∀ ⦃a : U⦄, a ∈ A → a ∈ C\n fix y : U\n assume h3 : y ∈ A\n **done::\n\n\nU : Type\nA B C : Set U\nh1 : A ⊆ B ∪ C\nh2 : ∀ x ∈ A, x ∉ B\ny : U\nh3 : y ∈ A\n⊢ y ∈ C\n\n\nNow we can combine h2 and h3 to conclude that y ∉ B. Since we have y : U, by universal instantiation, h2 y is a proof of y ∈ A → y ∉ B, and therefore by modus ponens, h2 y h3 is a proof of y ∉ B.\n\n\nexample (U : Type) (A B C : Set U) (h1 : A ⊆ B ∪ C)\n (h2 : ∀ (x : U), x ∈ A → x ∉ B) : A ⊆ C := by\n define --Goal : ∀ ⦃a : U⦄, a ∈ A → a ∈ C\n fix y : U\n assume h3 : y ∈ A\n have h4 : y ∉ B := h2 y h3\n **done::\n\n\nU : Type\nA B C : Set U\nh1 : A ⊆ B ∪ C\nh2 : ∀ x ∈ A, x ∉ B\ny : U\nh3 : y ∈ A\nh4 : y ∉ B\n⊢ y ∈ C\n\n\nWe should be able to use similar reasoning to combine h1 and h3, if we first write out the definition of h1.\n\n\nexample (U : Type) (A B C : Set U) (h1 : A ⊆ B ∪ C)\n (h2 : ∀ (x : U), x ∈ A → x ∉ B) : A ⊆ C := by\n define --Goal : ∀ ⦃a : U⦄, a ∈ A → a ∈ C\n fix y : U\n assume h3 : y ∈ A\n have h4 : y ∉ B := h2 y h3\n define at h1 --h1 : ∀ ⦃a : U⦄, a ∈ A → a ∈ B ∪ C\n **done::\n\n\nU : Type\nA B C : Set U\nh1 : ∀ ⦃a : U⦄,\n>> a ∈ A → a ∈ B ∪ C\nh2 : ∀ x ∈ A, x ∉ B\ny : U\nh3 : y ∈ A\nh4 : y ∉ B\n⊢ y ∈ C\n\n\nOnce again, Lean has 
used double braces to define h1, and now we are ready to explain what they mean. If the definition had been h1 : ∀ (a : U), a ∈ A → a ∈ B ∪ C, then exactly as in the previous step, h1 y h3 would be a proof of y ∈ B ∪ C. The use of double braces in the definition h1 : ∀ ⦃a : U⦄, a ∈ A → a ∈ B ∪ C means that you don’t need to tell Lean that y is being plugged in for a in the universal instantiation step; Lean will figure that out on its own. Thus, you can just write h1 h3 as a proof of y ∈ B ∪ C. Indeed, if you write h1 y h3 then you will get an error message, because Lean expects not to be told what to plug in for a. You might think of the definition of h1 as meaning h1 : _ ∈ A → _ ∈ B ∪ C, where the blanks can be filled in with anything of type U (with the same thing being put in both blanks). When you ask Lean to apply modus ponens by combining this statement with h3 : y ∈ A, Lean figures out that in order for modus ponens to apply, the blanks must be filled in with y.\nIn this situation, the a in h1 is called an implicit argument. What this means is that, when h1 is applied to make an inference in a proof, the value to be assigned to a is not specified explicitly; rather, the value is inferred by Lean. We will see many more examples of implicit arguments later in this book. In fact, there are two slightly different kinds of implicit arguments in Lean. One kind is indicated using the double braces ⦃ ⦄ used in this example, and the other is indicated using curly braces, { }. 
The difference between these two kinds of implicit arguments won’t be important in this book; all that will matter to us is that if you see either ∀ ⦃a : U⦄ or ∀ {a : U} rather than ∀ (a : U), then you must remember that a is an implicit argument.\n\n\nexample (U : Type) (A B C : Set U) (h1 : A ⊆ B ∪ C)\n (h2 : ∀ (x : U), x ∈ A → x ∉ B) : A ⊆ C := by\n define --Goal : ∀ ⦃a : U⦄, a ∈ A → a ∈ C\n fix y : U\n assume h3 : y ∈ A\n have h4 : y ∉ B := h2 y h3\n define at h1 --h1 : ∀ ⦃a : U⦄, a ∈ A → a ∈ B ∪ C\n have h5 : y ∈ B ∪ C := h1 h3\n **done::\n\n\nU : Type\nA B C : Set U\nh1 : ∀ ⦃a : U⦄,\n>> a ∈ A → a ∈ B ∪ C\nh2 : ∀ x ∈ A, x ∉ B\ny : U\nh3 : y ∈ A\nh4 : y ∉ B\nh5 : y ∈ B ∪ C\n⊢ y ∈ C\n\n\nIf Lean was able to figure out that y should be plugged in for a in h1 in this step, couldn’t it have figured out that y should be plugged in for x in h2 in the previous have step? The answer is yes. Of course, in h2, x was not an implicit argument, so Lean wouldn’t automatically figure out what to plug in for x. But we could have asked it to figure it out by writing the proof in the previous step as h2 _ h3 rather than h2 y h3. In a term-mode proof, an underscore represents a blank to be filled in by Lean. Try changing the earlier step of the proof to have h4 : y ∉ B := h2 _ h3 and you will see that Lean will accept it. Of course, in this case this doesn’t save us any typing, but in some situations it is useful to let Lean figure out some part of a proof.\nLean’s ability to fill in blanks in term-mode proofs is limited. For example, if you try changing the previous step to have h4 : y ∉ B := h2 y _, you’ll get a red squiggle under the blank, and the error message in the Infoview pane will say don't know how to synthesize placeholder. In other words, Lean was unable to figure out how to fill in the blank in this case. 
In future proofs you might try replacing some expressions with blanks to get a feel for what Lean can and cannot figure out for itself.\nContinuing with the proof, we see that we’re almost done, because we can combine h4 and h5 to reach our goal. To see how, we first write out the definition of h5.\n\n\nexample (U : Type) (A B C : Set U) (h1 : A ⊆ B ∪ C)\n (h2 : ∀ (x : U), x ∈ A → x ∉ B) : A ⊆ C := by\n define --Goal : ∀ ⦃a : U⦄, a ∈ A → a ∈ C\n fix y : U\n assume h3 : y ∈ A\n have h4 : y ∉ B := h2 y h3\n define at h1 --h1 : ∀ ⦃a : U⦄, a ∈ A → a ∈ B ∪ C\n have h5 : y ∈ B ∪ C := h1 h3\n define at h5 --h5 : y ∈ B ∨ y ∈ C\n **done::\n\n\nU : Type\nA B C : Set U\nh1 : ∀ ⦃a : U⦄,\n>> a ∈ A → a ∈ B ∪ C\nh2 : ∀ x ∈ A, x ∉ B\ny : U\nh3 : y ∈ A\nh4 : y ∉ B\nh5 : y ∈ B ∨ y ∈ C\n⊢ y ∈ C\n\n\nA conditional law will convert h5 to y ∉ B → y ∈ C, and then modus ponens with h4 will complete the proof.\n\n\nexample (U : Type) (A B C : Set U) (h1 : A ⊆ B ∪ C)\n (h2 : ∀ (x : U), x ∈ A → x ∉ B) : A ⊆ C := by\n define --Goal : ∀ ⦃a : U⦄, a ∈ A → a ∈ C\n fix y : U\n assume h3 : y ∈ A\n have h4 : y ∉ B := h2 y h3\n define at h1 --h1 : ∀ ⦃a : U⦄, a ∈ A → a ∈ B ∪ C\n have h5 : y ∈ B ∪ C := h1 h3\n define at h5 --h5 : y ∈ B ∨ y ∈ C\n conditional at h5 --h5 : y ∉ B → y ∈ C\n show y ∈ C from h5 h4\n done\n\n\nNo goals\n\n\nNext we turn to strategies for working with existential quantifiers (HTPI p. 118).\n\n\nTo prove a goal of the form ∃ (x : U), P x:\n\nFind a value of x, say a, for which you think P a is true, and prove P a.\n\nThis strategy is based on the fact that if you have a : U and h : P a, then you can infer ∃ (x : U), P x. Indeed, in this situation the expression Exists.intro a h is a Lean term-mode proof of ∃ (x : U), P x. The name Exists.intro indicates that this is a rule for introducing an existential quantifier.\nNote that, as with the universal instantiation rule, a here can be any expression denoting an object of type U; it need not be simply a variable. 
For example, if A and B have type Set U, F has type Set (Set U), and you have a given h : A ∪ B ∈ F, then Exists.intro (A ∪ B) h is a proof of ∃ (x : Set U), x ∈ F.\nAs suggested by the strategy above, we will often want to use the Exists.intro rule in situations in which our goal is ∃ (x : U), P x and we have an object a of type U that we think makes P a true, but we don’t yet have a proof of P a. In that situation we can use the tactic apply Exists.intro a _. Recall that the apply tactic asks Lean to figure out what to put in the blank to turn Exists.intro a _ into a proof of the goal. Lean will figure out that what needs to go in the blank is a proof of P a, so it sets P a to be the goal. In other words, the tactic apply Exists.intro a _ has the following effect on the tactic state:\n\n\n>> ⋮\na : U\n⊢ ∃ (x : U), P x\n\n\n>> ⋮\na : U\n⊢ P a\n\n\nOur strategy for using an existential given is a rule that is called existential instantiation in HTPI (HTPI p. 120):\n\n\nTo use a given of the form ∃ (x : U), P x:\n\nIntroduce a new variable, say u, into the proof to stand for an object of type U for which P u is true.\n\nSuppose that, in a Lean proof, you have h : ∃ (x : U), P x. To apply the existential instantiation rule, you would use the tactic obtain (u : U) (h' : P u) from h. This tactic introduces into the tactic state both a new variable u of type U and also the identifier h' for the new given P u. Note that h can be any proof of a statement of the form ∃ (x : U), P x; it need not be just a single identifier.\nOften, if your goal is an existential statement ∃ (x : U), P x, you won’t be able to use the strategy above for existential goals right away, because you won’t know what object a to use in the tactic apply Exists.intro a _. You may have to wait until a likely candidate for a pops up in the course of the proof. On the other hand, it is usually best to use the obtain tactic right away if you have an existential given. 
This is illustrated in our next example.\n\n\nexample (U : Type) (P Q : Pred U)\n (h1 : ∀ (x : U), ∃ (y : U), P x → ¬ Q y)\n (h2 : ∃ (x : U), ∀ (y : U), P x → Q y) :\n ∃ (x : U), ¬P x := by\n \n **done::\n\n\nU : Type\nP Q : Pred U\nh1 : ∀ (x : U), ∃ (y : U),\n>> P x → ¬Q y\nh2 : ∃ (x : U), ∀ (y : U),\n>> P x → Q y\n⊢ ∃ (x : U), ¬P x\n\n\nThe goal is the existential statement ∃ (x : U), ¬P x, and our strategy for existential goals says that we should try to find an object a of type U that we think would make the statement ¬P a true. But we don’t have any objects of type U in the tactic state, so it looks like we can’t use that strategy yet. Similarly, we can’t use the given h1 yet, since we have nothing to plug in for x in h1. However, h2 is an existential given, and we can use it right away.\n\n\nexample (U : Type) (P Q : Pred U)\n (h1 : ∀ (x : U), ∃ (y : U), P x → ¬ Q y)\n (h2 : ∃ (x : U), ∀ (y : U), P x → Q y) :\n ∃ (x : U), ¬P x := by\n obtain (a : U)\n (h3 : ∀ (y : U), P a → Q y) from h2\n **done::\n\n\nU : Type\nP Q : Pred U\nh1 : ∀ (x : U), ∃ (y : U),\n>> P x → ¬Q y\nh2 : ∃ (x : U), ∀ (y : U),\n>> P x → Q y\na : U\nh3 : ∀ (y : U), P a → Q y\n⊢ ∃ (x : U), ¬P x\n\n\nNow that we have a : U, we can apply universal instantiation to h1, plugging in a for x.\n\n\nexample (U : Type) (P Q : Pred U)\n (h1 : ∀ (x : U), ∃ (y : U), P x → ¬ Q y)\n (h2 : ∃ (x : U), ∀ (y : U), P x → Q y) :\n ∃ (x : U), ¬P x := by\n obtain (a : U)\n (h3 : ∀ (y : U), P a → Q y) from h2\n have h4 : ∃ (y : U), P a → ¬ Q y := h1 a\n **done::\n\n\nU : Type\nP Q : Pred U\nh1 : ∀ (x : U), ∃ (y : U),\n>> P x → ¬Q y\nh2 : ∃ (x : U), ∀ (y : U),\n>> P x → Q y\na : U\nh3 : ∀ (y : U), P a → Q y\nh4 : ∃ (y : U), P a → ¬Q y\n⊢ ∃ (x : U), ¬P x\n\n\nBy the way, this is another case in which Lean could have figured out a part of the proof on its own. 
Try changing h1 a in the last step to h1 _, and you’ll see that Lean will be able to figure out how to fill in the blank.\nOur new given h4 is another existential statement, so again we use it right away to introduce another object of type U. Since this object might not be the same as a, we must give it a different name. (Indeed, if you try to use the name a again, Lean will give you an error message.)\n\n\nexample (U : Type) (P Q : Pred U)\n (h1 : ∀ (x : U), ∃ (y : U), P x → ¬ Q y)\n (h2 : ∃ (x : U), ∀ (y : U), P x → Q y) :\n ∃ (x : U), ¬P x := by\n obtain (a : U)\n (h3 : ∀ (y : U), P a → Q y) from h2\n have h4 : ∃ (y : U), P a → ¬ Q y := h1 a\n obtain (b : U) (h5 : P a → ¬ Q b) from h4\n **done::\n\n\nU : Type\nP Q : Pred U\nh1 : ∀ (x : U), ∃ (y : U),\n>> P x → ¬Q y\nh2 : ∃ (x : U), ∀ (y : U),\n>> P x → Q y\na : U\nh3 : ∀ (y : U), P a → Q y\nh4 : ∃ (y : U), P a → ¬Q y\nb : U\nh5 : P a → ¬Q b\n⊢ ∃ (x : U), ¬P x\n\n\nWe have not yet used h3. We could plug in either a or b for y in h3, but a little thought should show you that plugging in b is more useful.\n\n\nexample (U : Type) (P Q : Pred U)\n (h1 : ∀ (x : U), ∃ (y : U), P x → ¬ Q y)\n (h2 : ∃ (x : U), ∀ (y : U), P x → Q y) :\n ∃ (x : U), ¬P x := by\n obtain (a : U)\n (h3 : ∀ (y : U), P a → Q y) from h2\n have h4 : ∃ (y : U), P a → ¬ Q y := h1 a\n obtain (b : U) (h5 : P a → ¬ Q b) from h4\n have h6 : P a → Q b := h3 b\n **done::\n\n\nU : Type\nP Q : Pred U\nh1 : ∀ (x : U), ∃ (y : U),\n>> P x → ¬Q y\nh2 : ∃ (x : U), ∀ (y : U),\n>> P x → Q y\na : U\nh3 : ∀ (y : U), P a → Q y\nh4 : ∃ (y : U), P a → ¬Q y\nb : U\nh5 : P a → ¬Q b\nh6 : P a → Q b\n⊢ ∃ (x : U), ¬P x\n\n\nNow look at h5 and h6. They show that P a leads to contradictory conclusions, ¬Q b and Q b. This means that P a must be false. 
We finally know what value of x to use to prove the goal.\n\n\nexample (U : Type) (P Q : Pred U)\n (h1 : ∀ (x : U), ∃ (y : U), P x → ¬ Q y)\n (h2 : ∃ (x : U), ∀ (y : U), P x → Q y) :\n ∃ (x : U), ¬P x := by\n obtain (a : U)\n (h3 : ∀ (y : U), P a → Q y) from h2\n have h4 : ∃ (y : U), P a → ¬ Q y := h1 a\n obtain (b : U) (h5 : P a → ¬ Q b) from h4\n have h6 : P a → Q b := h3 b\n apply Exists.intro a _\n **done::\n\n\nU : Type\nP Q : Pred U\nh1 : ∀ (x : U), ∃ (y : U),\n>> P x → ¬Q y\nh2 : ∃ (x : U), ∀ (y : U),\n>> P x → Q y\na : U\nh3 : ∀ (y : U), P a → Q y\nh4 : ∃ (y : U), P a → ¬Q y\nb : U\nh5 : P a → ¬Q b\nh6 : P a → Q b\n⊢ ¬P a\n\n\nSince the goal is now a negative statement that cannot be reexpressed as a positive statement, we use proof by contradiction.\n\n\nexample (U : Type) (P Q : Pred U)\n (h1 : ∀ (x : U), ∃ (y : U), P x → ¬ Q y)\n (h2 : ∃ (x : U), ∀ (y : U), P x → Q y) :\n ∃ (x : U), ¬P x := by\n obtain (a : U)\n (h3 : ∀ (y : U), P a → Q y) from h2\n have h4 : ∃ (y : U), P a → ¬ Q y := h1 a\n obtain (b : U) (h5 : P a → ¬ Q b) from h4\n have h6 : P a → Q b := h3 b\n apply Exists.intro a _\n by_contra h7\n **done::\n\n\nU : Type\nP Q : Pred U\nh1 : ∀ (x : U), ∃ (y : U),\n>> P x → ¬Q y\nh2 : ∃ (x : U), ∀ (y : U),\n>> P x → Q y\na : U\nh3 : ∀ (y : U), P a → Q y\nh4 : ∃ (y : U), P a → ¬Q y\nb : U\nh5 : P a → ¬Q b\nh6 : P a → Q b\nh7 : P a\n⊢ False\n\n\nNow h5 h7 is a proof of ¬Q b and h6 h7 is a proof of Q b, so h5 h7 (h6 h7) is a proof of False.\n\n\nexample (U : Type) (P Q : Pred U)\n (h1 : ∀ (x : U), ∃ (y : U), P x → ¬ Q y)\n (h2 : ∃ (x : U), ∀ (y : U), P x → Q y) :\n ∃ (x : U), ¬P x := by\n obtain (a : U)\n (h3 : ∀ (y : U), P a → Q y) from h2\n have h4 : ∃ (y : U), P a → ¬ Q y := h1 a\n obtain (b : U) (h5 : P a → ¬ Q b) from h4\n have h6 : P a → Q b := h3 b\n apply Exists.intro a _\n by_contra h7\n show False from h5 h7 (h6 h7)\n done\n\n\nNo goals\n\n\nWe conclude this section with the theorem from Example 3.3.5 in HTPI. 
That theorem concerns a union of a family of sets. In HTPI, such a union is written using a large union symbol, \\(\\bigcup\\). Lean uses the symbol ⋃₀, which is entered by typing \\U0 (that is, backslash–capital U–zero). For an intersection of a family of sets, Lean uses ⋂₀, typed as \\I0.\n\n\ntheorem Example_3_3_5 (U : Type) (B : Set U)\n (F : Set (Set U)) : ⋃₀ F ⊆ B → F ⊆ 𝒫 B := by\n \n **done::\n\n\nU : Type\nB : Set U\nF : Set (Set U)\n⊢ ⋃₀ F ⊆ B → F ⊆ 𝒫 B\n\n\nNote that F has type Set (Set U), which means that it is a set whose elements are sets of objects of type U. Since the goal is a conditional statement, we assume the antecedent and set the consequent as our goal. We’ll also write out the definition of the new goal.\n\n\ntheorem Example_3_3_5 (U : Type) (B : Set U)\n (F : Set (Set U)) : ⋃₀ F ⊆ B → F ⊆ 𝒫 B := by\n assume h1 : ⋃₀ F ⊆ B\n define\n **done::\n\n\nU : Type\nB : Set U\nF : Set (Set U)\nh1 : ⋃₀ F ⊆ B\n⊢ ∀ ⦃a : Set U⦄,\n>> a ∈ F → a ∈ 𝒫 B\n\n\nBased on the form of the goal, we introduce an arbitrary object x of type Set U and assume x ∈ F. The new goal will be x ∈ 𝒫 B. 
The define tactic works out that this means x ⊆ B, which can be further expanded to ∀ ⦃a : U⦄, a ∈ x → a ∈ B.\n\n\ntheorem Example_3_3_5 (U : Type) (B : Set U)\n (F : Set (Set U)) : ⋃₀ F ⊆ B → F ⊆ 𝒫 B := by\n assume h1 : ⋃₀ F ⊆ B\n define\n fix x : Set U\n assume h2 : x ∈ F\n define\n **done::\n\n\nU : Type\nB : Set U\nF : Set (Set U)\nh1 : ⋃₀ F ⊆ B\nx : Set U\nh2 : x ∈ F\n⊢ ∀ ⦃a : U⦄,\n>> a ∈ x → a ∈ B\n\n\nOnce again the form of the goal dictates our next steps: introduce an arbitrary y of type U and assume y ∈ x.\n\n\ntheorem Example_3_3_5 (U : Type) (B : Set U)\n (F : Set (Set U)) : ⋃₀ F ⊆ B → F ⊆ 𝒫 B := by\n assume h1 : ⋃₀ F ⊆ B\n define\n fix x : Set U\n assume h2 : x ∈ F\n define\n fix y : U\n assume h3 : y ∈ x\n **done::\n\n\nU : Type\nB : Set U\nF : Set (Set U)\nh1 : ⋃₀ F ⊆ B\nx : Set U\nh2 : x ∈ F\ny : U\nh3 : y ∈ x\n⊢ y ∈ B\n\n\nThe goal can be analyzed no further, so we turn to the givens. We haven’t used h1 yet. To see how to use it, we write out its definition.\n\n\ntheorem Example_3_3_5 (U : Type) (B : Set U)\n (F : Set (Set U)) : ⋃₀ F ⊆ B → F ⊆ 𝒫 B := by\n assume h1 : ⋃₀ F ⊆ B\n define\n fix x : Set U\n assume h2 : x ∈ F\n define\n fix y : U\n assume h3 : y ∈ x\n define at h1\n **done::\n\n\nU : Type\nB : Set U\nF : Set (Set U)\nh1 : ∀ ⦃a : U⦄,\n>> a ∈ ⋃₀ F → a ∈ B\nx : Set U\nh2 : x ∈ F\ny : U\nh3 : y ∈ x\n⊢ y ∈ B\n\n\nNow we see that we can try to use h1 to reach our goal. Indeed, h1 _ would be a proof of the goal if we could fill in the blank with a proof of y ∈ ⋃₀ F. 
So we use the apply h1 _ tactic.\n\n\ntheorem Example_3_3_5 (U : Type) (B : Set U)\n (F : Set (Set U)) : ⋃₀ F ⊆ B → F ⊆ 𝒫 B := by\n assume h1 : ⋃₀ F ⊆ B\n define\n fix x : Set U\n assume h2 : x ∈ F\n define\n fix y : U\n assume h3 : y ∈ x\n define at h1\n apply h1 _\n **done::\n\n\nU : Type\nB : Set U\nF : Set (Set U)\nh1 : ∀ ⦃a : U⦄,\n>> a ∈ ⋃₀ F → a ∈ B\nx : Set U\nh2 : x ∈ F\ny : U\nh3 : y ∈ x\n⊢ y ∈ ⋃₀ F\n\n\nOnce again we have a goal that can be analyzed by using the define tactic.\n\n\ntheorem Example_3_3_5 (U : Type) (B : Set U)\n (F : Set (Set U)) : ⋃₀ F ⊆ B → F ⊆ 𝒫 B := by\n assume h1 : ⋃₀ F ⊆ B\n define\n fix x : Set U\n assume h2 : x ∈ F\n define\n fix y : U\n assume h3 : y ∈ x\n define at h1\n apply h1 _\n define\n **done::\n\n\nU : Type\nB : Set U\nF : Set (Set U)\nh1 : ∀ ⦃a : U⦄,\n>> a ∈ ⋃₀ F → a ∈ B\nx : Set U\nh2 : x ∈ F\ny : U\nh3 : y ∈ x\n⊢ ∃ t ∈ F, y ∈ t\n\n\nOur goal now is ∃ (t : Set U), t ∈ F ∧ y ∈ t, although once again Lean has used a bounded quantifier to write this in a shorter form. So we look for a value of t that will make the statement t ∈ F ∧ y ∈ t true. The givens h2 and h3 tell us that x is such a value, so as described earlier our next tactic should be apply Exists.intro x _.\n\n\ntheorem Example_3_3_5 (U : Type) (B : Set U)\n (F : Set (Set U)) : ⋃₀ F ⊆ B → F ⊆ 𝒫 B := by\n assume h1 : ⋃₀ F ⊆ B\n define\n fix x : Set U\n assume h2 : x ∈ F\n define\n fix y : U\n assume h3 : y ∈ x\n define at h1\n apply h1 _\n define\n apply Exists.intro x _\n **done::\n\n\nU : Type\nB : Set U\nF : Set (Set U)\nh1 : ∀ ⦃a : U⦄,\n>> a ∈ ⋃₀ F → a ∈ B\nx : Set U\nh2 : x ∈ F\ny : U\nh3 : y ∈ x\n⊢ x ∈ F ∧ y ∈ x\n\n\nClearly the goal now follows from h2 and h3, but how do we write the proof in Lean? Since we need to introduce the “and” symbol ∧, you shouldn’t be surprised to learn that the rule we need is called And.intro. 
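Before seeing And.intro at work in this proof, it may help to see the rule in isolation. The following hypothetical example (with arbitrary propositions P and Q, unrelated to the theorem above) shows the introduction rule in its simplest form:

```lean
-- And.intro combines a proof of each conjunct into a proof
-- of the conjunction.
example (P Q : Prop) (h1 : P) (h2 : Q) : P ∧ Q := And.intro h1 h2
```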
Proof strategies for statements involving “and” will be the subject of the next section.\n\n\ntheorem Example_3_3_5 (U : Type) (B : Set U)\n (F : Set (Set U)) : ⋃₀ F ⊆ B → F ⊆ 𝒫 B := by\n assume h1 : ⋃₀ F ⊆ B\n define\n fix x : Set U\n assume h2 : x ∈ F\n define\n fix y : U\n assume h3 : y ∈ x\n define at h1\n apply h1 _\n define\n apply Exists.intro x _\n show x ∈ F ∧ y ∈ x from And.intro h2 h3\n done\n\n\nNo goals\n\n\nYou might want to compare the Lean proof above to the way the proof was written in HTPI. Here are the theorem and proof from HTPI (HTPI p. 125):\n\nSuppose \\(B\\) is a set and \\(\\mathcal{F}\\) is a family of sets. If \\(\\bigcup\\mathcal{F} \\subseteq B\\) then \\(\\mathcal{F} \\subseteq \\mathscr{P}(B)\\).\n\n\nProof. Suppose \\(\\bigcup \\mathcal{F} \\subseteq B\\). Let \\(x\\) be an arbitrary element of \\(\\mathcal{F}\\). Let \\(y\\) be an arbitrary element of \\(x\\). Since \\(y \\in x\\) and \\(x \\in \\mathcal{F}\\), by the definition of \\(\\bigcup \\mathcal{F}\\), \\(y \\in \\bigcup \\mathcal{F}\\). But then since \\(\\bigcup \\mathcal{F} \\subseteq B\\), \\(y \\in B\\). Since \\(y\\) was an arbitrary element of \\(x\\), we can conclude that \\(x \\subseteq B\\), so \\(x \\in \\mathscr{P}(B)\\). But \\(x\\) was an arbitrary element of \\(\\mathcal{F}\\), so this shows that \\(\\mathcal{F} \\subseteq \\mathscr{P}(B)\\), as required.  
□\n\n\n\nExercises\n\ntheorem Exercise_3_3_1\n (U : Type) (P Q : Pred U) (h1 : ∃ (x : U), P x → Q x) :\n (∀ (x : U), P x) → ∃ (x : U), Q x := by\n \n **done::\n\n\ntheorem Exercise_3_3_8 (U : Type) (F : Set (Set U)) (A : Set U)\n (h1 : A ∈ F) : A ⊆ ⋃₀ F := by\n \n **done::\n\n\ntheorem Exercise_3_3_9 (U : Type) (F : Set (Set U)) (A : Set U)\n (h1 : A ∈ F) : ⋂₀ F ⊆ A := by\n \n **done::\n\n\ntheorem Exercise_3_3_10 (U : Type) (B : Set U) (F : Set (Set U))\n (h1 : ∀ (A : Set U), A ∈ F → B ⊆ A) : B ⊆ ⋂₀ F := by\n \n **done::\n\n\ntheorem Exercise_3_3_13 (U : Type)\n (F G : Set (Set U)) : F ⊆ G → ⋂₀ G ⊆ ⋂₀ F := by\n \n **done::" }, { "objectID": "Chap3.html#proofs-involving-conjunctions-and-biconditionals", @@ -137,14 +137,14 @@ "href": "Chap3.html#proofs-involving-disjunctions", "title": "3  Proofs", "section": "3.5. Proofs Involving Disjunctions", - "text": "3.5. Proofs Involving Disjunctions\nA common proof method for dealing with givens or goals that are disjunctions is proof by cases. Here’s how it works (HTPI p. 143).\n\nTo use a given of the form P ∨ Q:\n\nBreak your proof into cases. For case 1, assume that P is true and use this assumption to prove the goal. For case 2, assume that Q is true and prove the goal.\n\nIn Lean, you can break a proof into cases by using the by_cases tactic. If you have a given h : P ∨ Q, then the tactic by_cases on h will break your proof into two cases. For the first case, the given h will be changed to h : P, and for the second, it will be changed to h : Q; the goal for both cases will be the same as the original goal. Thus, the effect of the by_cases on h tactic is as follows:\n\n\n>> ⋮\nh : P ∨ Q\n⊢ goal\n\n\ncase Case_1\n>> ⋮\nh : P\n⊢ goal\ncase Case_2\n>> ⋮\nh : Q\n⊢ goal\n\n\nNotice that the original given h : P ∨ Q gets replaced by h : P in case 1 and h : Q in case 2. 
This is usually what is most convenient, but if you write by_cases on h with h1, then the original given h will be preserved, and new givens h1 : P and h1 : Q will be added to cases 1 and 2, respectively. If you want different names for the new givens in the two cases, then use by_cases on h with h1, h2 to add the new given h1 : P in case 1 and h2 : Q in case 2.\nYou can follow by_cases on with any proof of a disjunction, even if that proof is not just a single identifier. In that case you will want to add with to specify the identifier or identifiers to be used for the new assumptions in the two cases. Another variant is that you can use the tactic by_cases h : P to break your proof into two cases, with the new assumptions being h : P in case 1 and h : ¬P in case 2. In other words, the effect of by_cases h : P is the same as adding the new given h : P ∨ ¬P (which, of course, is a tautology) and then using the tactic by_cases on h.\nThere are several introduction rules that you can use in Lean to prove a goal of the form P ∨ Q. If you have h : P, then Lean will accept Or.intro_left Q h as a proof of P ∨ Q. In most situations Lean can infer the proposition Q from context, and in that case you can use the shorter form Or.inl h as a proof of P ∨ Q. You can see the difference between Or.intro_left and Or.inl by using the #check command:\n\n@Or.intro_left : ∀ {a : Prop} (b : Prop), a → a ∨ b\n\n@Or.inl : ∀ {a b : Prop}, a → a ∨ b\n\nNotice that b is an implicit argument in Or.inl, but not in Or.intro_left.\nSimilarly, if you have h : Q, then Or.intro_right P h is a proof of P ∨ Q. In most situations Lean can infer P from context, and you can use the shorter form Or.inr h.\nOften, when your goal has the form P ∨ Q, you will be unable to prove P, and also unable to prove Q. Proof by cases can help in that situation as well (HTPI p. 145).\n\n\nTo prove a goal of the form P ∨ Q:\n\nBreak your proof into cases. 
In each case, either prove P or prove Q.\n\nExample 3.5.2 from HTPI illustrates these strategies:\n\n\ntheorem Example_3_5_2\n (U : Type) (A B C : Set U) :\n A \\ (B \\ C) ⊆ (A \\ B) ∪ C := by\n\n **done::\n\n\nU : Type\nA B C : Set U\n⊢ A \\ (B \\ C) ⊆ A \\ B ∪ C\n\n\nThe define tactic would rewrite the goal as ∀ ⦃a : U⦄, a ∈ A \\ (B \\ C) → a ∈ A \\ B ∪ C, which suggests that our next two tactics should be fix x : U and assume h1 : x ∈ A \\ (B \\ C). But as we have seen before, if you know what the result of the define tactic is going to be, then there is usually no need to use it. After introducing x as an arbitrary element of A \\ (B \\ C), we write out the definitions of our new given and goal to help guide our next strategy choice:\n\n\ntheorem Example_3_5_2\n (U : Type) (A B C : Set U) :\n A \\ (B \\ C) ⊆ (A \\ B) ∪ C := by\n fix x : U\n assume h1 : x ∈ A \\ (B \\ C)\n define; define at h1\n **done::\n\n\nU : Type\nA B C : Set U\nx : U\nh1 : x ∈ A ∧ x ∉ B \\ C\n⊢ x ∈ A \\ B ∨ x ∈ C\n\n\nThe goal is now a disjunction, which suggests that proof by cases might be helpful. But what cases should we use? The key is to look at the meaning of the right half of the given h1. 
The meaning of x ∉ B \\ C is ¬(x ∈ B ∧ x ∉ C), which, by one of the De Morgan laws, is equivalent to x ∉ B ∨ x ∈ C.\n\n\ntheorem Example_3_5_2\n (U : Type) (A B C : Set U) :\n A \\ (B \\ C) ⊆ (A \\ B) ∪ C := by\n fix x : U\n assume h1 : x ∈ A \\ (B \\ C)\n define; define at h1\n have h2 : x ∉ B \\ C := h1.right\n define at h2; demorgan at h2\n --h2 : x ∉ B ∨ x ∈ C\n **done::\n\n\nU : Type\nA B C : Set U\nx : U\nh1 : x ∈ A ∧ x ∉ B \\ C\nh2 : x ∉ B ∨ x ∈ C\n⊢ x ∈ A \\ B ∨ x ∈ C\n\n\nThe new given h2 is now a disjunction, which suggests what cases we should use:\n\n\ntheorem Example_3_5_2\n (U : Type) (A B C : Set U) :\n A \\ (B \\ C) ⊆ (A \\ B) ∪ C := by\n fix x : U\n assume h1 : x ∈ A \\ (B \\ C)\n define; define at h1\n have h2 : x ∉ B \\ C := h1.right\n define at h2; demorgan at h2\n --h2 : x ∉ B ∨ x ∈ C\n by_cases on h2\n **done::\n\n\ncase Case_1\nU : Type\nA B C : Set U\nx : U\nh1 : x ∈ A ∧ x ∉ B \\ C\nh2 : x ∉ B\n⊢ x ∈ A \\ B ∨ x ∈ C\ncase Case_2\nU : Type\nA B C : Set U\nx : U\nh1 : x ∈ A ∧ x ∉ B \\ C\nh2 : x ∈ C\n⊢ x ∈ A \\ B ∨ x ∈ C\n\n\nOf course, now that we have two goals, we will introduce bullets labeling the two parts of the proof as case 1 and case 2. Looking at the givens h1 and h2 in both cases, it is not hard to see that we should be able to prove x ∈ A \\ B in case 1 and x ∈ C in case 2. Thus, in case 1 we will be able to give a proof of the goal that has the form Or.inl _, where the blank will be filled in with a proof of x ∈ A \\ B, and in case 2 we can use Or.inr _, filling in the blank with a proof of x ∈ C. This suggests that we should use the tactics apply Or.inl in case 1 and apply Or.inr in case 2. Focusing first on case 1, we get:\n\n\ntheorem Example_3_5_2\n (U : Type) (A B C : Set U) :\n A \\ (B \\ C) ⊆ (A \\ B) ∪ C := by\n fix x : U\n assume h1 : x ∈ A \\ (B \\ C)\n define; define at h1\n have h2 : x ∉ B \\ C := h1.right\n define at h2; demorgan at h2\n --h2 : x ∉ B ∨ x ∈ C\n by_cases on h2\n · -- Case 1. 
h2 : x ∉ B\n apply Or.inl\n **done::\n · -- Case 2. h2 : x ∈ C\n\n **done::\n done\n\n\ncase Case_1.h\nU : Type\nA B C : Set U\nx : U\nh1 : x ∈ A ∧ x ∉ B \ C\nh2 : x ∉ B\n⊢ x ∈ A \ B\n\n\nNotice that the tactic apply Or.inl has changed the goal for case 1 to the left half of the original goal, x ∈ A \ B. Since this means x ∈ A ∧ x ∉ B, we can complete case 1 by combining h1.left with h2, and then we can move on to case 2.\n\n\ntheorem Example_3_5_2\n (U : Type) (A B C : Set U) :\n A \ (B \ C) ⊆ (A \ B) ∪ C := by\n fix x : U\n assume h1 : x ∈ A \ (B \ C)\n define; define at h1\n have h2 : x ∉ B \ C := h1.right\n define at h2; demorgan at h2\n --h2 : x ∉ B ∨ x ∈ C\n by_cases on h2\n · -- Case 1. h2 : x ∉ B\n apply Or.inl\n show x ∈ A \ B from And.intro h1.left h2\n done\n · -- Case 2. h2 : x ∈ C\n\n **done::\n done\n\n\ncase Case_2\nU : Type\nA B C : Set U\nx : U\nh1 : x ∈ A ∧ x ∉ B \ C\nh2 : x ∈ C\n⊢ x ∈ A \ B ∨ x ∈ C\n\n\nCase 2 is similar, using Or.inr and h2:\n\n\ntheorem Example_3_5_2\n (U : Type) (A B C : Set U) :\n A \ (B \ C) ⊆ (A \ B) ∪ C := by\n fix x : U\n assume h1 : x ∈ A \ (B \ C)\n define; define at h1\n have h2 : x ∉ B \ C := h1.right\n define at h2; demorgan at h2\n --h2 : x ∉ B ∨ x ∈ C\n by_cases on h2\n · -- Case 1. h2 : x ∉ B\n apply Or.inl\n show x ∈ A \ B from And.intro h1.left h2\n done\n · -- Case 2. h2 : x ∈ C\n apply Or.inr\n show x ∈ C from h2\n done\n done\n\n\nNo goals\n\n\nThere is a second strategy that is often useful to prove a goal of the form P ∨ Q. It is motivated by the fact that P ∨ Q is equivalent to both ¬P → Q and ¬Q → P (HTPI p. 147).\n\n\nTo prove a goal of the form P ∨ Q:\n\nAssume that P is false and prove Q, or assume that Q is false and prove P.\n\nIf your goal is P ∨ Q, then the Lean tactic or_left with h will add the new given h : ¬Q to the tactic state and set the goal to be P, and or_right with h will add h : ¬P to the tactic state and set the goal to be Q. 
For example, here is the effect of the tactic or_left with h:\n\n\n>> ⋮\n⊢ P ∨ Q\n\n\n>> ⋮\nh : ¬Q\n⊢ P\n\n\nNotice that or_left and or_right have the same effect as apply Or.inl and apply Or.inr, except that each adds a new given to the tactic state. Sometimes you can tell in advance that you won’t need the extra given, and in that case the tactics apply Or.inl and apply Or.inr can be useful. For example, that was the case in the example above. But if you think the extra given might be useful, you are better off using or_left or or_right. Here’s an example illustrating this.\n\n\nexample (U : Type) (A B C : Set U)\n (h1 : A \\ B ⊆ C) : A ⊆ B ∪ C := by\n \n **done::\n\n\nU : Type\nA B C : Set U\nh1 : A \\ B ⊆ C\n⊢ A ⊆ B ∪ C\n\n\nOf course, we begin by letting x be an arbitrary element of A. Writing out the meaning of the new goal shows that it is a disjunction.\n\n\nexample (U : Type) (A B C : Set U)\n (h1 : A \\ B ⊆ C) : A ⊆ B ∪ C := by\n fix x : U\n assume h2 : x ∈ A\n define\n **done::\n\n\nU : Type\nA B C : Set U\nh1 : A \\ B ⊆ C\nx : U\nh2 : x ∈ A\n⊢ x ∈ B ∨ x ∈ C\n\n\nLooking at the givens h1 and h2, we see that if we assume x ∉ B, then we should be able to prove x ∈ C. This suggests that we should use the or_right tactic.\n\n\nexample (U : Type) (A B C : Set U)\n (h1 : A \\ B ⊆ C) : A ⊆ B ∪ C := by\n fix x : U\n assume h2 : x ∈ A\n define\n or_right with h3\n **done::\n\n\nU : Type\nA B C : Set U\nh1 : A \\ B ⊆ C\nx : U\nh2 : x ∈ A\nh3 : x ∉ B\n⊢ x ∈ C\n\n\nWe can now complete the proof. Notice that h1 _ will be a proof of the goal x ∈ C, if we can fill in the blank with a proof of x ∈ A \\ B. 
Since x ∈ A \\ B means x ∈ A ∧ x ∉ B, we can prove it with the expression And.intro h2 h3.\n\n\nexample (U : Type) (A B C : Set U)\n (h1 : A \\ B ⊆ C) : A ⊆ B ∪ C := by\n fix x : U\n assume h2 : x ∈ A\n define\n or_right with h3\n show x ∈ C from h1 (And.intro h2 h3)\n done\n\n\nNo goals\n\n\nThe fact that P ∨ Q is equivalent to both ¬P → Q and ¬Q → P also suggests another strategy for using a given that is a disjunction (HTPI p. 149).\n\n\nTo use a given of the form P ∨ Q:\n\nIf you are also given ¬P, or you can prove that P is false, then you can use this given to conclude that Q is true. Similarly, if you are given ¬Q or can prove that Q is false, then you can conclude that P is true.\n\nThis strategy is a rule of inference called disjunctive syllogism, and the tactic for using this strategy in Lean is called disj_syll. If you have h1 : P ∨ Q and h2 : ¬P, then the tactic disj_syll h1 h2 will change h1 to h1 : Q; if instead you have h2 : ¬Q, then disj_syll h1 h2 will change h1 to h1 : P. Notice that, as with the by_cases tactic, the given h1 gets replaced with the conclusion of the rule. The tactic disj_syll h1 h2 with h3 will preserve the original h1 and introduce the conclusion as a new given with the identifier h3. Also, as with the by_cases tactic, either h1 or h2 can be a complex proof rather than simply an identifier (although in that case it must be enclosed in parentheses, so that Lean can tell where h1 ends and h2 begins). The only requirement is that h1 must be a proof of a disjunction, and h2 must be a proof of the negation of one side of the disjunction. 
If h1 is not simply an identifier, then you will want to use with to specify the identifier to be used for the conclusion of the rule.\nHere’s an example illustrating the use of the disjunctive syllogism rule.\n\n\nexample\n (U : Type) (A B C : Set U) (h1 : A ⊆ B ∪ C)\n (h2 : ¬∃ (x : U), x ∈ A ∩ B) : A ⊆ C := by\n\n **done::\n\n\nU : Type\nA B C : Set U\nh1 : A ⊆ B ∪ C\nh2 : ¬∃ (x : U),\n>> x ∈ A ∩ B\n⊢ A ⊆ C\n\n\nOf course, we begin by introducing an arbitrary element of A. We also rewrite h2 as an equivalent positive statement.\n\n\nexample\n (U : Type) (A B C : Set U) (h1 : A ⊆ B ∪ C)\n (h2 : ¬∃ (x : U), x ∈ A ∩ B) : A ⊆ C := by\n fix a : U\n assume h3 : a ∈ A\n quant_neg at h2\n **done::\n\n\nU : Type\nA B C : Set U\nh1 : A ⊆ B ∪ C\nh2 : ∀ (x : U),\n>> x ∉ A ∩ B\na : U\nh3 : a ∈ A\n⊢ a ∈ C\n\n\nWe can now make two inferences by combining h1 with h3 and by applying h2 to a. To see how to use the inferred statements, we write out their definitions, and since one of them is a negative statement, we reexpress it as an equivalent positive statement.\n\n\nexample\n (U : Type) (A B C : Set U) (h1 : A ⊆ B ∪ C)\n (h2 : ¬∃ (x : U), x ∈ A ∩ B) : A ⊆ C := by\n fix a : U\n assume h3 : a ∈ A\n quant_neg at h2\n have h4 : a ∈ B ∪ C := h1 h3\n have h5 : a ∉ A ∩ B := h2 a\n define at h4\n define at h5; demorgan at h5\n **done::\n\n\nU : Type\nA B C : Set U\nh1 : A ⊆ B ∪ C\nh2 : ∀ (x : U),\n>> x ∉ A ∩ B\na : U\nh3 : a ∈ A\nh4 : a ∈ B ∨ a ∈ C\nh5 : a ∉ A ∨ a ∉ B\n⊢ a ∈ C\n\n\nBoth h4 and h5 are disjunctions, and looking at h3 we see that the disjunctive syllogism rule can be applied. From h3 and h5 we can draw the conclusion a ∉ B, and then combining that conclusion with h4 we can infer a ∈ C. 
Since that is the goal, we are done.\n\n\nexample\n (U : Type) (A B C : Set U) (h1 : A ⊆ B ∪ C)\n (h2 : ¬∃ (x : U), x ∈ A ∩ B) : A ⊆ C := by\n fix a : U\n assume h3 : a ∈ A\n quant_neg at h2\n have h4 : a ∈ B ∪ C := h1 h3\n have h5 : a ∉ A ∩ B := h2 a\n define at h4\n define at h5; demorgan at h5\n disj_syll h5 h3 --h5 : a ∉ B\n disj_syll h4 h5 --h4 : a ∈ C\n show a ∈ C from h4\n done\n\n\nNo goals\n\n\nWe’re going to redo the last example, to illustrate another useful technique in Lean. We start with some of the same steps as before.\n\n\nexample\n (U : Type) (A B C : Set U) (h1 : A ⊆ B ∪ C)\n (h2 : ¬∃ (x : U), x ∈ A ∩ B) : A ⊆ C := by\n fix a : U\n assume h3 : a ∈ A\n have h4 : a ∈ B ∪ C := h1 h3\n define at h4\n **done::\n\n\nU : Type\nA B C : Set U\nh1 : A ⊆ B ∪ C\nh2 : ¬∃ (x : U),\n>> x ∈ A ∩ B\na : U\nh3 : a ∈ A\nh4 : a ∈ B ∨ a ∈ C\n⊢ a ∈ C\n\n\nAt this point, you might see a possible route to the goal: from h2 and h3 we should be able to prove that a ∉ B, and then, combining that with h4 by the disjunctive syllogism rule, we should be able to deduce the goal a ∈ C. Let’s try writing the proof that way.\n\n\n??example::\n (U : Type) (A B C : Set U) (h1 : A ⊆ B ∪ C)\n (h2 : ¬∃ (x : U), x ∈ A ∩ B) : A ⊆ C := by\n fix a : U\n assume h3 : a ∈ A\n have h4 : a ∈ B ∪ C := h1 h3\n define at h4\n have h5 : a ∉ B := sorry\n disj_syll h4 h5 --h4 : a ∈ C\n show a ∈ C from h4\n done\n\n\nNo goals\n\n\nWe have introduced a new idea in this proof. The justification we have given for introducing h5 : a ∉ B is sorry. You might think of this as meaning “Sorry, I’m not going to give a justification for this statement, but please accept it anyway.” Of course, this is cheating; in a complete proof, every step must be justified. Lean accepts sorry as a proof of any statement, but it displays it in red to warn you that you’re cheating. 
It also puts a brown squiggle under the keyword example and it puts the message declaration uses 'sorry' in the Infoview, to warn you that, although the proof has reached the goal, it is not fully justified.\nAlthough writing the proof this way is cheating, it is a convenient way to see that our plan of attack for this proof is reasonable. Lean has accepted the proof, except for the warning that we have used sorry. So now we know that if we go back and replace sorry with a proof of a ∉ B, then we will have a complete proof.\nThe proof of a ∉ B is hard enough that it is easier to do it in tactic mode rather than term mode. So we will begin the proof as we always do for tactic-mode proofs: we replace sorry with by, leave a blank line, and then put done, indented further than the surrounding text. When we put the cursor on the blank line before done, we see the tactic state for our “proof within a proof.”\n\n\nexample\n (U : Type) (A B C : Set U) (h1 : A ⊆ B ∪ C)\n (h2 : ¬∃ (x : U), x ∈ A ∩ B) : A ⊆ C := by\n fix a : U\n assume h3 : a ∈ A\n have h4 : a ∈ B ∪ C := h1 h3\n define at h4\n have h5 : a ∉ B := by\n\n **done::\n disj_syll h4 h5 --h4 : a ∈ C\n show a ∈ C from h4\n done\n\n\nU : Type\nA B C : Set U\nh1 : A ⊆ B ∪ C\nh2 : ¬∃ (x : U),\n>> x ∈ A ∩ B\na : U\nh3 : a ∈ A\nh4 : a ∈ B ∨ a ∈ C\n⊢ a ∉ B\n\n\nNote that h5 : a ∉ B is not a given in the tactic state, because we have not yet justified it; in fact, a ∉ B is the goal. This goal is a negative statement, and h2 is also negative. This suggests that we could try using proof by contradiction, achieving the contradiction by contradicting h2. 
So we use the tactic contradict h2 with h6.\n\n\nexample\n (U : Type) (A B C : Set U) (h1 : A ⊆ B ∪ C)\n (h2 : ¬∃ (x : U), x ∈ A ∩ B) : A ⊆ C := by\n fix a : U\n assume h3 : a ∈ A\n have h4 : a ∈ B ∪ C := h1 h3\n define at h4\n have h5 : a ∉ B := by\n contradict h2 with h6\n **done::\n disj_syll h4 h5 --h4 : a ∈ C\n show a ∈ C from h4\n done\n\n\nU : Type\nA B C : Set U\nh1 : A ⊆ B ∪ C\nh2 : ¬∃ (x : U),\n>> x ∈ A ∩ B\na : U\nh3 : a ∈ A\nh4 : a ∈ B ∨ a ∈ C\nh6 : a ∈ B\n⊢ ∃ (x : U), x ∈ A ∩ B\n\n\nLooking at h3 and h6, we see that the right value to plug in for x in the goal is a. In fact, Exists.intro a _ will prove the goal, if we can fill in the blank with a proof of a ∈ A ∩ B. Since this means a ∈ A ∧ a ∈ B, we can prove it with And.intro h3 h6. Thus, we can complete the proof in one more step:\n\n\nexample\n (U : Type) (A B C : Set U) (h1 : A ⊆ B ∪ C)\n (h2 : ¬∃ (x : U), x ∈ A ∩ B) : A ⊆ C := by\n fix a : U\n assume h3 : a ∈ A\n have h4 : a ∈ B ∪ C := h1 h3\n define at h4\n have h5 : a ∉ B := by\n contradict h2 with h6\n show ∃ (x : U), x ∈ A ∩ B from\n Exists.intro a (And.intro h3 h6)\n done\n disj_syll h4 h5 --h4 : a ∈ C\n show a ∈ C from h4\n done\n\n\nNo goals\n\n\nThe red squiggle has disappeared from the word done, indicating that the proof is complete.\nIt was not really necessary for us to use sorry when writing this proof. We could have simply written the steps in order, exactly as they appear above. Any time you use the have tactic with a conclusion that is difficult to justify, you have a choice. You can establish the have with sorry, complete the proof, and then return and fill in a justification for the have, as we did in the example above. 
3.5. Proofs Involving Disjunctions\nA common proof method for dealing with givens or goals that are disjunctions is proof by cases. Here’s how it works (HTPI p. 143).\n\nTo use a given of the form P ∨ Q:\n\nBreak your proof into cases. For case 1, assume that P is true and use this assumption to prove the goal. For case 2, assume that Q is true and prove the goal.\n\nIn Lean, you can break a proof into cases by using the by_cases tactic. If you have a given h : P ∨ Q, then the tactic by_cases on h will break your proof into two cases. For the first case, the given h will be changed to h : P, and for the second, it will be changed to h : Q; the goal for both cases will be the same as the original goal.
Thus, the effect of the by_cases on h tactic is as follows:\n\n\n>> ⋮\nh : P ∨ Q\n⊢ goal\n\n\ncase Case_1\n>> ⋮\nh : P\n⊢ goal\ncase Case_2\n>> ⋮\nh : Q\n⊢ goal\n\n\nNotice that the original given h : P ∨ Q gets replaced by h : P in case 1 and h : Q in case 2. This is usually what is most convenient, but if you write by_cases on h with h1, then the original given h will be preserved, and new givens h1 : P and h1 : Q will be added to cases 1 and 2, respectively. If you want different names for the new givens in the two cases, then use by_cases on h with h1, h2 to add the new given h1 : P in case 1 and h2 : Q in case 2.\nYou can follow by_cases on with any proof of a disjunction, even if that proof is not just a single identifier. In that case you will want to add with to specify the identifier or identifiers to be used for the new assumptions in the two cases. Another variant is that you can use the tactic by_cases h : P to break your proof into two cases, with the new assumptions being h : P in case 1 and h : ¬P in case 2. In other words, the effect of by_cases h : P is the same as adding the new given h : P ∨ ¬P (which, of course, is a tautology) and then using the tactic by_cases on h.\nThere are several introduction rules that you can use in Lean to prove a goal of the form P ∨ Q. If you have h : P, then Lean will accept Or.intro_left Q h as a proof of P ∨ Q. In most situations Lean can infer the proposition Q from context, and in that case you can use the shorter form Or.inl h as a proof of P ∨ Q. You can see the difference between Or.intro_left and Or.inl by using the #check command:\n\n@Or.intro_left : ∀ {a : Prop} (b : Prop), a → a ∨ b\n\n@Or.inl : ∀ {a b : Prop}, a → a ∨ b\n\nNotice that b is an implicit argument in Or.inl, but not in Or.intro_left.\nSimilarly, if you have h : Q, then Or.intro_right P h is a proof of P ∨ Q.
In most situations Lean can infer P from context, and you can use the shorter form Or.inr h.\nOften, when your goal has the form P ∨ Q, you will be unable to prove P, and also unable to prove Q. Proof by cases can help in that situation as well (HTPI p. 145).\n\n\nTo prove a goal of the form P ∨ Q:\n\nBreak your proof into cases. In each case, either prove P or prove Q.\n\nExample 3.5.2 from HTPI illustrates these strategies:\n\n\ntheorem Example_3_5_2\n (U : Type) (A B C : Set U) :\n A \\ (B \\ C) ⊆ (A \\ B) ∪ C := by\n\n **done::\n\n\nU : Type\nA B C : Set U\n⊢ A \\ (B \\ C) ⊆ A \\ B ∪ C\n\n\nThe define tactic would rewrite the goal as ∀ ⦃a : U⦄, a ∈ A \\ (B \\ C) → a ∈ A \\ B ∪ C, which suggests that our next two tactics should be fix x : U and assume h1 : x ∈ A \\ (B \\ C). But as we have seen before, if you know what the result of the define tactic is going to be, then there is usually no need to use it. After introducing x as an arbitrary element of A \\ (B \\ C), we write out the definitions of our new given and goal to help guide our next strategy choice:\n\n\ntheorem Example_3_5_2\n (U : Type) (A B C : Set U) :\n A \\ (B \\ C) ⊆ (A \\ B) ∪ C := by\n fix x : U\n assume h1 : x ∈ A \\ (B \\ C)\n define; define at h1\n **done::\n\n\nU : Type\nA B C : Set U\nx : U\nh1 : x ∈ A ∧ x ∉ B \\ C\n⊢ x ∈ A \\ B ∨ x ∈ C\n\n\nThe goal is now a disjunction, which suggests that proof by cases might be helpful. But what cases should we use? The key is to look at the meaning of the right half of the given h1. 
The meaning of x ∉ B \\ C is ¬(x ∈ B ∧ x ∉ C), which, by one of the De Morgan laws, is equivalent to x ∉ B ∨ x ∈ C.\n\n\ntheorem Example_3_5_2\n (U : Type) (A B C : Set U) :\n A \\ (B \\ C) ⊆ (A \\ B) ∪ C := by\n fix x : U\n assume h1 : x ∈ A \\ (B \\ C)\n define; define at h1\n have h2 : x ∉ B \\ C := h1.right\n define at h2; demorgan at h2\n --h2 : x ∉ B ∨ x ∈ C\n **done::\n\n\nU : Type\nA B C : Set U\nx : U\nh1 : x ∈ A ∧ x ∉ B \\ C\nh2 : x ∉ B ∨ x ∈ C\n⊢ x ∈ A \\ B ∨ x ∈ C\n\n\nThe new given h2 is now a disjunction, which suggests what cases we should use:\n\n\ntheorem Example_3_5_2\n (U : Type) (A B C : Set U) :\n A \\ (B \\ C) ⊆ (A \\ B) ∪ C := by\n fix x : U\n assume h1 : x ∈ A \\ (B \\ C)\n define; define at h1\n have h2 : x ∉ B \\ C := h1.right\n define at h2; demorgan at h2\n --h2 : x ∉ B ∨ x ∈ C\n by_cases on h2\n **done::\n\n\ncase Case_1\nU : Type\nA B C : Set U\nx : U\nh1 : x ∈ A ∧ x ∉ B \\ C\nh2 : x ∉ B\n⊢ x ∈ A \\ B ∨ x ∈ C\ncase Case_2\nU : Type\nA B C : Set U\nx : U\nh1 : x ∈ A ∧ x ∉ B \\ C\nh2 : x ∈ C\n⊢ x ∈ A \\ B ∨ x ∈ C\n\n\nOf course, now that we have two goals, we will introduce bullets labeling the two parts of the proof as case 1 and case 2. Looking at the givens h1 and h2 in both cases, it is not hard to see that we should be able to prove x ∈ A \\ B in case 1 and x ∈ C in case 2. Thus, in case 1 we will be able to give a proof of the goal that has the form Or.inl _, where the blank will be filled in with a proof of x ∈ A \\ B, and in case 2 we can use Or.inr _, filling in the blank with a proof of x ∈ C. This suggests that we should use the tactics apply Or.inl in case 1 and apply Or.inr in case 2. Focusing first on case 1, we get:\n\n\ntheorem Example_3_5_2\n (U : Type) (A B C : Set U) :\n A \\ (B \\ C) ⊆ (A \\ B) ∪ C := by\n fix x : U\n assume h1 : x ∈ A \\ (B \\ C)\n define; define at h1\n have h2 : x ∉ B \\ C := h1.right\n define at h2; demorgan at h2\n --h2 : x ∉ B ∨ x ∈ C\n by_cases on h2\n · -- Case 1. 
h2 : x ∉ B\n apply Or.inl\n **done::\n · -- Case 2. h2 : x ∈ C\n\n **done::\n done\n\n\ncase Case_1.h\nU : Type\nA B C : Set U\nx : U\nh1 : x ∈ A ∧ x ∉ B \\ C\nh2 : x ∉ B\n⊢ x ∈ A \\ B\n\n\nNotice that the tactic apply Or.inl has changed the goal for case 1 to the left half of the original goal, x ∈ A \\ B. Since this means x ∈ A ∧ x ∉ B, we can complete case 1 by combining h1.left with h2, and then we can move on to case 2.\n\n\ntheorem Example_3_5_2\n (U : Type) (A B C : Set U) :\n A \\ (B \\ C) ⊆ (A \\ B) ∪ C := by\n fix x : U\n assume h1 : x ∈ A \\ (B \\ C)\n define; define at h1\n have h2 : x ∉ B \\ C := h1.right\n define at h2; demorgan at h2\n --h2 : x ∉ B ∨ x ∈ C\n by_cases on h2\n · -- Case 1. h2 : x ∉ B\n apply Or.inl\n show x ∈ A \\ B from And.intro h1.left h2\n done\n · -- Case 2. h2 : x ∈ C\n\n **done::\n done\n\n\ncase Case_2\nU : Type\nA B C : Set U\nx : U\nh1 : x ∈ A ∧ x ∉ B \\ C\nh2 : x ∈ C\n⊢ x ∈ A \\ B ∨ x ∈ C\n\n\nCase 2 is similar, using Or.inr and h2.\n\n\ntheorem Example_3_5_2\n (U : Type) (A B C : Set U) :\n A \\ (B \\ C) ⊆ (A \\ B) ∪ C := by\n fix x : U\n assume h1 : x ∈ A \\ (B \\ C)\n define; define at h1\n have h2 : x ∉ B \\ C := h1.right\n define at h2; demorgan at h2\n --h2 : x ∉ B ∨ x ∈ C\n by_cases on h2\n · -- Case 1. h2 : x ∉ B\n apply Or.inl\n show x ∈ A \\ B from And.intro h1.left h2\n done\n · -- Case 2. h2 : x ∈ C\n apply Or.inr\n show x ∈ C from h2\n done\n done\n\n\nNo goals\n\n\nThere is a second strategy that is often useful to prove a goal of the form P ∨ Q. It is motivated by the fact that P ∨ Q is equivalent to both ¬P → Q and ¬Q → P (HTPI p. 147).\n\n\nTo prove a goal of the form P ∨ Q:\n\nAssume that P is false and prove Q, or assume that Q is false and prove P.\n\nIf your goal is P ∨ Q, then the Lean tactic or_left with h will add the new given h : ¬Q to the tactic state and set the goal to be P, and or_right with h will add h : ¬P to the tactic state and set the goal to be Q.
For example, here is the effect of the tactic or_left with h:\n\n\n>> ⋮\n⊢ P ∨ Q\n\n\n>> ⋮\nh : ¬Q\n⊢ P\n\n\nNotice that or_left and or_right have the same effect as apply Or.inl and apply Or.inr, except that each adds a new given to the tactic state. Sometimes you can tell in advance that you won’t need the extra given, and in that case the tactics apply Or.inl and apply Or.inr can be useful. For example, that was the case in the example above. But if you think the extra given might be useful, you are better off using or_left or or_right. Here’s an example illustrating this.\n\n\nexample (U : Type) (A B C : Set U)\n (h1 : A \\ B ⊆ C) : A ⊆ B ∪ C := by\n \n **done::\n\n\nU : Type\nA B C : Set U\nh1 : A \\ B ⊆ C\n⊢ A ⊆ B ∪ C\n\n\nOf course, we begin by letting x be an arbitrary element of A. Writing out the meaning of the new goal shows that it is a disjunction.\n\n\nexample (U : Type) (A B C : Set U)\n (h1 : A \\ B ⊆ C) : A ⊆ B ∪ C := by\n fix x : U\n assume h2 : x ∈ A\n define\n **done::\n\n\nU : Type\nA B C : Set U\nh1 : A \\ B ⊆ C\nx : U\nh2 : x ∈ A\n⊢ x ∈ B ∨ x ∈ C\n\n\nLooking at the givens h1 and h2, we see that if we assume x ∉ B, then we should be able to prove x ∈ C. This suggests that we should use the or_right tactic.\n\n\nexample (U : Type) (A B C : Set U)\n (h1 : A \\ B ⊆ C) : A ⊆ B ∪ C := by\n fix x : U\n assume h2 : x ∈ A\n define\n or_right with h3\n **done::\n\n\nU : Type\nA B C : Set U\nh1 : A \\ B ⊆ C\nx : U\nh2 : x ∈ A\nh3 : x ∉ B\n⊢ x ∈ C\n\n\nWe can now complete the proof. Notice that h1 _ will be a proof of the goal x ∈ C, if we can fill in the blank with a proof of x ∈ A \\ B. 
Since x ∈ A \\ B means x ∈ A ∧ x ∉ B, we can prove it with the expression And.intro h2 h3.\n\n\nexample (U : Type) (A B C : Set U)\n (h1 : A \\ B ⊆ C) : A ⊆ B ∪ C := by\n fix x : U\n assume h2 : x ∈ A\n define\n or_right with h3\n show x ∈ C from h1 (And.intro h2 h3)\n done\n\n\nNo goals\n\n\nThe fact that P ∨ Q is equivalent to both ¬P → Q and ¬Q → P also suggests another strategy for using a given that is a disjunction (HTPI p. 149).\n\n\nTo use a given of the form P ∨ Q:\n\nIf you are also given ¬P, or you can prove that P is false, then you can use this given to conclude that Q is true. Similarly, if you are given ¬Q or can prove that Q is false, then you can conclude that P is true.\n\nThis strategy is a rule of inference called disjunctive syllogism, and the tactic for using this strategy in Lean is called disj_syll. If you have h1 : P ∨ Q and h2 : ¬P, then the tactic disj_syll h1 h2 will change h1 to h1 : Q; if instead you have h2 : ¬Q, then disj_syll h1 h2 will change h1 to h1 : P. Notice that, as with the by_cases tactic, the given h1 gets replaced with the conclusion of the rule. The tactic disj_syll h1 h2 with h3 will preserve the original h1 and introduce the conclusion as a new given with the identifier h3. Also, as with the by_cases tactic, either h1 or h2 can be a complex proof rather than simply an identifier (although in that case it must be enclosed in parentheses, so that Lean can tell where h1 ends and h2 begins). The only requirement is that h1 must be a proof of a disjunction, and h2 must be a proof of the negation of one side of the disjunction. 
If h1 is not simply an identifier, then you will want to use with to specify the identifier to be used for the conclusion of the rule.\nHere’s an example illustrating the use of the disjunctive syllogism rule.\n\n\nexample\n (U : Type) (A B C : Set U) (h1 : A ⊆ B ∪ C)\n (h2 : ¬∃ (x : U), x ∈ A ∩ B) : A ⊆ C := by\n\n **done::\n\n\nU : Type\nA B C : Set U\nh1 : A ⊆ B ∪ C\nh2 : ¬∃ (x : U),\n>> x ∈ A ∩ B\n⊢ A ⊆ C\n\n\nOf course, we begin by introducing an arbitrary element of A. We also rewrite h2 as an equivalent positive statement.\n\n\nexample\n (U : Type) (A B C : Set U) (h1 : A ⊆ B ∪ C)\n (h2 : ¬∃ (x : U), x ∈ A ∩ B) : A ⊆ C := by\n fix a : U\n assume h3 : a ∈ A\n quant_neg at h2\n **done::\n\n\nU : Type\nA B C : Set U\nh1 : A ⊆ B ∪ C\nh2 : ∀ (x : U),\n>> x ∉ A ∩ B\na : U\nh3 : a ∈ A\n⊢ a ∈ C\n\n\nWe can now make two inferences by combining h1 with h3 and by applying h2 to a. To see how to use the inferred statements, we write out their definitions, and since one of them is a negative statement, we reexpress it as an equivalent positive statement.\n\n\nexample\n (U : Type) (A B C : Set U) (h1 : A ⊆ B ∪ C)\n (h2 : ¬∃ (x : U), x ∈ A ∩ B) : A ⊆ C := by\n fix a : U\n assume h3 : a ∈ A\n quant_neg at h2\n have h4 : a ∈ B ∪ C := h1 h3\n have h5 : a ∉ A ∩ B := h2 a\n define at h4\n define at h5; demorgan at h5\n **done::\n\n\nU : Type\nA B C : Set U\nh1 : A ⊆ B ∪ C\nh2 : ∀ (x : U),\n>> x ∉ A ∩ B\na : U\nh3 : a ∈ A\nh4 : a ∈ B ∨ a ∈ C\nh5 : a ∉ A ∨ a ∉ B\n⊢ a ∈ C\n\n\nBoth h4 and h5 are disjunctions, and looking at h3 we see that the disjunctive syllogism rule can be applied. From h3 and h5 we can draw the conclusion a ∉ B, and then combining that conclusion with h4 we can infer a ∈ C. 
Since that is the goal, we are done.\n\n\nexample\n (U : Type) (A B C : Set U) (h1 : A ⊆ B ∪ C)\n (h2 : ¬∃ (x : U), x ∈ A ∩ B) : A ⊆ C := by\n fix a : U\n assume h3 : a ∈ A\n quant_neg at h2\n have h4 : a ∈ B ∪ C := h1 h3\n have h5 : a ∉ A ∩ B := h2 a\n define at h4\n define at h5; demorgan at h5\n disj_syll h5 h3 --h5 : a ∉ B\n disj_syll h4 h5 --h4 : a ∈ C\n show a ∈ C from h4\n done\n\n\nNo goals\n\n\nWe’re going to redo the last example, to illustrate another useful technique in Lean. We start with some of the same steps as before.\n\n\nexample\n (U : Type) (A B C : Set U) (h1 : A ⊆ B ∪ C)\n (h2 : ¬∃ (x : U), x ∈ A ∩ B) : A ⊆ C := by\n fix a : U\n assume h3 : a ∈ A\n have h4 : a ∈ B ∪ C := h1 h3\n define at h4\n **done::\n\n\nU : Type\nA B C : Set U\nh1 : A ⊆ B ∪ C\nh2 : ¬∃ (x : U),\n>> x ∈ A ∩ B\na : U\nh3 : a ∈ A\nh4 : a ∈ B ∨ a ∈ C\n⊢ a ∈ C\n\n\nAt this point, you might see a possible route to the goal: from h2 and h3 we should be able to prove that a ∉ B, and then, combining that with h4 by the disjunctive syllogism rule, we should be able to deduce the goal a ∈ C. Let’s try writing the proof that way.\n\n\n??example::\n (U : Type) (A B C : Set U) (h1 : A ⊆ B ∪ C)\n (h2 : ¬∃ (x : U), x ∈ A ∩ B) : A ⊆ C := by\n fix a : U\n assume h3 : a ∈ A\n have h4 : a ∈ B ∪ C := h1 h3\n define at h4\n have h5 : a ∉ B := sorry\n disj_syll h4 h5 --h4 : a ∈ C\n show a ∈ C from h4\n done\n\n\nNo goals\n\n\nWe have introduced a new idea in this proof. The justification we have given for introducing h5 : a ∉ B is sorry. You might think of this as meaning “Sorry, I’m not going to give a justification for this statement, but please accept it anyway.” Of course, this is cheating; in a complete proof, every step must be justified. Lean accepts sorry as a proof of any statement, but it displays it in red to warn you that you’re cheating. 
It also puts a brown squiggle under the keyword example and it puts the message declaration uses 'sorry' in the Infoview, to warn you that, although the proof has reached the goal, it is not fully justified.\nAlthough writing the proof this way is cheating, it is a convenient way to see that our plan of attack for this proof is reasonable. Lean has accepted the proof, except for the warning that we have used sorry. So now we know that if we go back and replace sorry with a proof of a ∉ B, then we will have a complete proof.\nThe proof of a ∉ B is hard enough that it is easier to do it in tactic mode rather than term mode. So we will begin the proof as we always do for tactic-mode proofs: we replace sorry with by, leave a blank line, and then put done, indented further than the surrounding text. When we put the cursor on the blank line before done, we see the tactic state for our “proof within a proof.”\n\n\nexample\n (U : Type) (A B C : Set U) (h1 : A ⊆ B ∪ C)\n (h2 : ¬∃ (x : U), x ∈ A ∩ B) : A ⊆ C := by\n fix a : U\n assume h3 : a ∈ A\n have h4 : a ∈ B ∪ C := h1 h3\n define at h4\n have h5 : a ∉ B := by\n\n **done::\n disj_syll h4 h5 --h4 : a ∈ C\n show a ∈ C from h4\n done\n\n\nU : Type\nA B C : Set U\nh1 : A ⊆ B ∪ C\nh2 : ¬∃ (x : U),\n>> x ∈ A ∩ B\na : U\nh3 : a ∈ A\nh4 : a ∈ B ∨ a ∈ C\n⊢ a ∉ B\n\n\nNote that h5 : a ∉ B is not a given in the tactic state, because we have not yet justified it; in fact, a ∉ B is the goal. This goal is a negative statement, and h2 is also negative. This suggests that we could try using proof by contradiction, achieving the contradiction by contradicting h2. 
So we use the tactic contradict h2 with h6.\n\n\nexample\n (U : Type) (A B C : Set U) (h1 : A ⊆ B ∪ C)\n (h2 : ¬∃ (x : U), x ∈ A ∩ B) : A ⊆ C := by\n fix a : U\n assume h3 : a ∈ A\n have h4 : a ∈ B ∪ C := h1 h3\n define at h4\n have h5 : a ∉ B := by\n contradict h2 with h6\n **done::\n disj_syll h4 h5 --h4 : a ∈ C\n show a ∈ C from h4\n done\n\n\nU : Type\nA B C : Set U\nh1 : A ⊆ B ∪ C\nh2 : ¬∃ (x : U),\n>> x ∈ A ∩ B\na : U\nh3 : a ∈ A\nh4 : a ∈ B ∨ a ∈ C\nh6 : a ∈ B\n⊢ ∃ (x : U), x ∈ A ∩ B\n\n\nLooking at h3 and h6, we see that the right value to plug in for x in the goal is a. In fact, Exists.intro a _ will prove the goal, if we can fill in the blank with a proof of a ∈ A ∩ B. Since this means a ∈ A ∧ a ∈ B, we can prove it with And.intro h3 h6. Thus, we can complete the proof in one more step:\n\n\nexample\n (U : Type) (A B C : Set U) (h1 : A ⊆ B ∪ C)\n (h2 : ¬∃ (x : U), x ∈ A ∩ B) : A ⊆ C := by\n fix a : U\n assume h3 : a ∈ A\n have h4 : a ∈ B ∪ C := h1 h3\n define at h4\n have h5 : a ∉ B := by\n contradict h2 with h6\n show ∃ (x : U), x ∈ A ∩ B from\n Exists.intro a (And.intro h3 h6)\n done\n disj_syll h4 h5 --h4 : a ∈ C\n show a ∈ C from h4\n done\n\n\nNo goals\n\n\nThe red squiggle has disappeared from the word done, indicating that the proof is complete.\nIt was not really necessary for us to use sorry when writing this proof. We could have simply written the steps in order, exactly as they appear above. Any time you use the have tactic with a conclusion that is difficult to justify, you have a choice. You can establish the have with sorry, complete the proof, and then return and fill in a justification for the have, as we did in the example above. 
Or, you can justify the have right away by typing by after := and then plunging into the “proof within a proof.” Once you complete the inner proof, you can continue with the original proof.\nAnd in case you were wondering: yes, if the inner proof uses the have tactic with a statement that is hard to justify, then you can write a “proof within a proof within a proof”!\n\n\nExercises\nIn each case, replace sorry with a proof.\n\ntheorem Exercise_3_5_2 (U : Type) (A B C : Set U) :\n (A ∪ B) \\ C ⊆ A ∪ (B \\ C) := sorry\n\n\ntheorem Exercise_3_5_5 (U : Type) (A B C : Set U)\n (h1 : A ∩ C ⊆ B ∩ C) (h2 : A ∪ C ⊆ B ∪ C) : A ⊆ B := sorry\n\n\ntheorem Exercise_3_5_7 (U : Type) (A B C : Set U) :\n A ∪ C ⊆ B ∪ C ↔ A \\ C ⊆ B \\ C := sorry\n\n\ntheorem Exercise_3_5_8 (U : Type) (A B : Set U) :\n 𝒫 A ∪ 𝒫 B ⊆ 𝒫 (A ∪ B) := sorry\n\n\ntheorem Exercise_3_5_17b (U : Type) (F : Set (Set U)) (B : Set U) :\n B ∪ (⋂₀ F) = {x : U | ∀ (A : Set U), A ∈ F → x ∈ B ∪ A} := sorry\n\n\ntheorem Exercise_3_5_18 (U : Type) (F G H : Set (Set U))\n (h1 : ∀ (A : Set U), A ∈ F → ∀ (B : Set U), B ∈ G → A ∪ B ∈ H) :\n ⋂₀ H ⊆ (⋂₀ F) ∪ (⋂₀ G) := sorry\n\n\ntheorem Exercise_3_5_24a (U : Type) (A B C : Set U) :\n (A ∪ B) ∆ C ⊆ (A ∆ C) ∪ (B ∆ C) := sorry\n\n\n3.6. Existence and Uniqueness Proofs\nRecall that ∃! (x : U), P x means that there is exactly one x of type U such that P x is true. One way to deal with a given or goal of this form is to use the define tactic to rewrite it as the equivalent statement ∃ (x : U), P x ∧ ∀ (x_1 : U), P x_1 → x_1 = x. You can then apply techniques discussed previously in this chapter. However, there are also proof techniques, and corresponding Lean tactics, for working directly with givens and goals of this form.\nOften a goal of the form ∃!
(x : U), P x is proven by using the following strategy. This is a slight rephrasing of the strategy presented in HTPI. The rephrasing is based on the fact that for any propositions A, B, and C, A ∧ B → C is equivalent to A → B → C (you can check this equivalence by making a truth table). The second of these statements is usually easier to work with in Lean than the first one, so we will often rephrase statements that have the form A ∧ B → C as A → B → C. To see why the second statement is easier to use, suppose that you have givens hA : A and hB : B. If you also have h : A → B → C, then h hA is a proof of B → C, and therefore h hA hB is a proof of C. If instead you had h' : (A ∧ B) → C, then to prove C you would have to write h' (And.intro hA hB), which is a bit less convenient.\nWith that preparation, here is our strategy for proving statements of the form ∃! (x : U), P x (HTPI pp. 156–157).\n\nTo prove a goal of the form ∃! (x : U), P x:\n\nProve ∃ (x : U), P x and ∀ (x_1 x_2 : U), P x_1 → P x_2 → x_1 = x_2. The first of these goals says that there exists an x such that P x is true, and the second says that it is unique. The two parts of the proof are therefore sometimes labeled existence and uniqueness.\n\nTo apply this strategy in a Lean proof, we use the tactic exists_unique. We’ll illustrate this with the theorem from Example 3.6.2 in HTPI. Here’s how that theorem and its proof are presented in HTPI (HTPI pp. 157–158):\n\nThere is a unique set \\(A\\) such that for every set \\(B\\), \\(A \\cup B = B\\).\n\n\nProof. Existence: Clearly \\(\\forall B(\\varnothing \\cup B = B)\\), so \\(\\varnothing\\) has the required property.\nUniqueness: Suppose \\(\\forall B(C \\cup B = B)\\) and \\(\\forall B(D \\cup B = B)\\). Applying the first of these assumptions to \\(D\\) we see that \\(C \\cup D = D\\), and applying the second to \\(C\\) we get \\(D \\cup C = C\\). But clearly \\(C \\cup D = D \\cup C\\), so \\(C = D\\).  
□\n\nYou will notice that there are two statements in this proof that are described as “clearly” true. This brings up one of the difficulties with proving theorems in Lean: things that are clear to us are not necessarily clear to Lean! There are two ways to deal with such “clear” statements. The first is to see if the statement is in the library of theorems that Lean knows. The second is to prove the statement as a preliminary theorem that can then be used in the proof of our main theorem. We’ll take the second approach here, since proving these “clear” facts will give us more practice with Lean proofs, but later we’ll have more to say about searching for statements in Lean’s theorem library.\nThe first theorem we need says that for every set B, ∅ ∪ B = B, and it brings up a subtle issue: in Lean, the symbol ∅ is ambiguous! The reason for this is Lean’s strict typing rules. For each type U, there is an empty set of type Set U. There is, for example, the set of type Set Nat that contains no natural numbers, and also the set of type Set Real that contains no real numbers. To Lean, these are different sets, because they have different types. Which one does the symbol ∅ denote? The answer will be different in different contexts. Lean can often figure out from context which empty set you have in mind, but if it can’t, then you have to tell it explicitly by writing (∅ : Set U) rather than ∅. Fortunately, in our theorems Lean is able to figure out which empty set we have in mind.\nWith that preparation, we are ready to prove our first preliminary theorem. Since the goal is an equation between sets, our first step is to use the tactic apply Set.ext.\n\n\ntheorem empty_union {U : Type} (B : Set U) :\n ∅ ∪ B = B := by\n apply Set.ext\n **done::\n\n\ncase h\nU : Type\nB : Set U\n⊢ ∀ (x : U),\n>> x ∈ ∅ ∪ B ↔ x ∈ B\n\n\nBased on the form of the goal, our next two tactics should be fix x : U and apply Iff.intro. 
This leaves us with two goals, corresponding to the two directions of the biconditional, but we’ll focus first on just the left-to-right direction.\n\n\ntheorem empty_union {U : Type} (B : Set U) :\n ∅ ∪ B = B := by\n apply Set.ext\n fix x : U\n apply Iff.intro\n · -- (→)\n\n **done::\n · -- (←)\n\n **done::\n done\n\n\ncase h.mp\nU : Type\nB : Set U\nx : U\n⊢ x ∈ ∅ ∪ B → x ∈ B\n\n\nOf course, our next step is to assume x ∈ ∅ ∪ B. To help us see how to move forward, we also write out the definition of this assumption.\n\n\ntheorem empty_union {U : Type} (B : Set U) :\n ∅ ∪ B = B := by\n apply Set.ext\n fix x : U\n apply Iff.intro\n · -- (→)\n assume h1 : x ∈ ∅ ∪ B\n define at h1\n **done::\n · -- (←)\n\n **done::\n done\n\n\ncase h.mp\nU : Type\nB : Set U\nx : U\nh1 : x ∈ ∅ ∨ x ∈ B\n⊢ x ∈ B\n\n\nNow you should see a way to complete the proof: the statement x ∈ ∅ is false, so we should be able to apply the disjunctive syllogism rule to h1 to infer the goal x ∈ B. To carry out this plan, we’ll first have to prove x ∉ ∅. We’ll use the have tactic, and since there’s no obvious term-mode proof to justify it, we’ll try a tactic-mode proof.\n\n\ntheorem empty_union {U : Type} (B : Set U) :\n ∅ ∪ B = B := by\n apply Set.ext\n fix x : U\n apply Iff.intro\n · -- (→)\n assume h1 : x ∈ ∅ ∪ B\n define at h1\n have h2 : x ∉ ∅ := by\n\n **done::\n **done::\n · -- (←)\n\n **done::\n done\n\n\nU : Type\nB : Set U\nx : U\nh1 : x ∈ ∅ ∨ x ∈ B\n⊢ x ∉ ∅\n\n\nThe goal for our “proof within a proof” is a negative statement, so proof by contradiction seems like a good start.\n\n\ntheorem empty_union {U : Type} (B : Set U) :\n ∅ ∪ B = B := by\n apply Set.ext\n fix x : U\n apply Iff.intro\n · -- (→)\n assume h1 : x ∈ ∅ ∪ B\n define at h1\n have h2 : x ∉ ∅ := by\n by_contra h3\n **done::\n **done::\n · -- (←)\n\n **done::\n done\n\n\nU : Type\nB : Set U\nx : U\nh1 : x ∈ ∅ ∨ x ∈ B\nh3 : x ∈ ∅\n⊢ False\n\n\nTo see how to use the new assumption h3, we use the tactic define at h3. 
The definition Lean gives for the statement x ∈ ∅ is False. In other words, Lean knows that, by the definition of ∅, the statement x ∈ ∅ is false. Since False is our goal, this completes the inner proof, and we can return to the main proof.\n\n\ntheorem empty_union {U : Type} (B : Set U) :\n ∅ ∪ B = B := by\n apply Set.ext\n fix x : U\n apply Iff.intro\n · -- (→)\n assume h1 : x ∈ ∅ ∪ B\n define at h1\n have h2 : x ∉ ∅ := by\n by_contra h3\n define at h3 --h3 : False\n show False from h3\n done\n **done::\n · -- (←)\n\n **done::\n done\n\n\ncase h.mp\nU : Type\nB : Set U\nx : U\nh1 : x ∈ ∅ ∨ x ∈ B\nh2 : x ∉ ∅\n⊢ x ∈ B\n\n\nNow that we have established the claim h2 : x ∉ ∅, we can apply the disjunctive syllogism rule to h1 and h2 to reach the goal. This completes the left-to-right direction of the biconditional proof, so we move on to the right-to-left direction.\n\n\ntheorem empty_union {U : Type} (B : Set U) :\n ∅ ∪ B = B := by\n apply Set.ext\n fix x : U\n apply Iff.intro\n · -- (→)\n assume h1 : x ∈ ∅ ∪ B\n define at h1\n have h2 : x ∉ ∅ := by\n by_contra h3\n define at h3 --h3 : False\n show False from h3\n done\n disj_syll h1 h2 --h1 : x ∈ B\n show x ∈ B from h1\n done\n · -- (←)\n\n **done::\n done\n\n\ncase h.mpr\nU : Type\nB : Set U\nx : U\n⊢ x ∈ B → x ∈ ∅ ∪ B\n\n\nThis direction of the biconditional proof is easier: once we introduce the assumption h1 : x ∈ B, our goal will be x ∈ ∅ ∪ B, which means x ∈ ∅ ∨ x ∈ B, and we can prove it with the proof Or.inr h1.\n\n\ntheorem empty_union {U : Type} (B : Set U) :\n ∅ ∪ B = B := by\n apply Set.ext\n fix x : U\n apply Iff.intro\n · -- (→)\n assume h1 : x ∈ ∅ ∪ B\n define at h1\n have h2 : x ∉ ∅ := by\n by_contra h3\n define at h3 --h3 : False\n show False from h3\n done\n disj_syll h1 h2 --h1 : x ∈ B\n show x ∈ B from h1\n done\n · -- (←)\n assume h1 : x ∈ B\n show x ∈ ∅ ∪ B from Or.inr h1\n done\n done\n\n\nNo goals\n\n\nThe second fact that was called “clear” in the proof from Example 3.6.2 was the equation C ∪ 
3.6. Existence and Uniqueness Proofs\nRecall that ∃! (x : U), P x means that there is exactly one x of type U such that P x is true. One way to deal with a given or goal of this form is to use the define tactic to rewrite it as the equivalent statement ∃ (x : U), P x ∧ ∀ (x_1 : U), P x_1 → x_1 = x. You can then apply techniques discussed previously in this chapter. However, there are also proof techniques, and corresponding Lean tactics, for working directly with givens and goals of this form.\nOften a goal of the form ∃! (x : U), P x is proven by using the following strategy. This is a slight rephrasing of the strategy presented in HTPI. The rephrasing is based on the fact that for any propositions A, B, and C, A ∧ B → C is equivalent to A → B → C (you can check this equivalence by making a truth table).
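This equivalence can also be checked in Lean itself. Here is a minimal standalone sketch (an illustration only, not part of HTPI):

```lean
-- A ∧ B → C is equivalent to A → B → C (sometimes called currying).
example (A B C : Prop) : (A ∧ B → C) ↔ (A → B → C) :=
  Iff.intro
    (fun h a b => h (And.intro a b))    -- given a and b separately, package them into a pair
    (fun h hab => h hab.left hab.right) -- given the pair, feed its pieces one at a time
```

The two directions simply package and unpackage an And.intro.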
The second of these statements is usually easier to work with in Lean than the first one, so we will often rephrase statements that have the form A ∧ B → C as A → B → C. To see why the second statement is easier to use, suppose that you have givens hA : A and hB : B. If you also have h : A → B → C, then h hA is a proof of B → C, and therefore h hA hB is a proof of C. If instead you had h' : (A ∧ B) → C, then to prove C you would have to write h' (And.intro hA hB), which is a bit less convenient.\nWith that preparation, here is our strategy for proving statements of the form ∃! (x : U), P x (HTPI pp. 156–157).\n\nTo prove a goal of the form ∃! (x : U), P x:\n\nProve ∃ (x : U), P x and ∀ (x_1 x_2 : U), P x_1 → P x_2 → x_1 = x_2. The first of these goals says that there exists an x such that P x is true, and the second says that it is unique. The two parts of the proof are therefore sometimes labeled existence and uniqueness.\n\nTo apply this strategy in a Lean proof, we use the tactic exists_unique. We’ll illustrate this with the theorem from Example 3.6.2 in HTPI. Here’s how that theorem and its proof are presented in HTPI (HTPI pp. 157–158):\n\nThere is a unique set \\(A\\) such that for every set \\(B\\), \\(A \\cup B = B\\).\n\n\nProof. Existence: Clearly \\(\\forall B(\\varnothing \\cup B = B)\\), so \\(\\varnothing\\) has the required property.\nUniqueness: Suppose \\(\\forall B(C \\cup B = B)\\) and \\(\\forall B(D \\cup B = B)\\). Applying the first of these assumptions to \\(D\\) we see that \\(C \\cup D = D\\), and applying the second to \\(C\\) we get \\(D \\cup C = C\\). But clearly \\(C \\cup D = D \\cup C\\), so \\(C = D\\).  □\n\nYou will notice that there are two statements in this proof that are described as “clearly” true. This brings up one of the difficulties with proving theorems in Lean: things that are clear to us are not necessarily clear to Lean! There are two ways to deal with such “clear” statements. 
The first is to see if the statement is in the library of theorems that Lean knows. The second is to prove the statement as a preliminary theorem that can then be used in the proof of our main theorem. We’ll take the second approach here, since proving these “clear” facts will give us more practice with Lean proofs, but later we’ll have more to say about searching for statements in Lean’s theorem library.\nThe first theorem we need says that for every set B, ∅ ∪ B = B, and it brings up a subtle issue: in Lean, the symbol ∅ is ambiguous! The reason for this is Lean’s strict typing rules. For each type U, there is an empty set of type Set U. There is, for example, the set of type Set Nat that contains no natural numbers, and also the set of type Set Real that contains no real numbers. To Lean, these are different sets, because they have different types. Which one does the symbol ∅ denote? The answer will be different in different contexts. Lean can often figure out from context which empty set you have in mind, but if it can’t, then you have to tell it explicitly by writing (∅ : Set U) rather than ∅. Fortunately, in our theorems Lean is able to figure out which empty set we have in mind.\nWith that preparation, we are ready to prove our first preliminary theorem. Since the goal is an equation between sets, our first step is to use the tactic apply Set.ext.\n\n\ntheorem empty_union {U : Type} (B : Set U) :\n ∅ ∪ B = B := by\n apply Set.ext\n **done::\n\n\ncase h\nU : Type\nB : Set U\n⊢ ∀ (x : U),\n>> x ∈ ∅ ∪ B ↔ x ∈ B\n\n\nBased on the form of the goal, our next two tactics should be fix x : U and apply Iff.intro. 
This leaves us with two goals, corresponding to the two directions of the biconditional, but we’ll focus first on just the left-to-right direction.\n\n\ntheorem empty_union {U : Type} (B : Set U) :\n ∅ ∪ B = B := by\n apply Set.ext\n fix x : U\n apply Iff.intro\n · -- (→)\n\n **done::\n · -- (←)\n\n **done::\n done\n\n\ncase h.mp\nU : Type\nB : Set U\nx : U\n⊢ x ∈ ∅ ∪ B → x ∈ B\n\n\nOf course, our next step is to assume x ∈ ∅ ∪ B. To help us see how to move forward, we also write out the definition of this assumption.\n\n\ntheorem empty_union {U : Type} (B : Set U) :\n ∅ ∪ B = B := by\n apply Set.ext\n fix x : U\n apply Iff.intro\n · -- (→)\n assume h1 : x ∈ ∅ ∪ B\n define at h1\n **done::\n · -- (←)\n\n **done::\n done\n\n\ncase h.mp\nU : Type\nB : Set U\nx : U\nh1 : x ∈ ∅ ∨ x ∈ B\n⊢ x ∈ B\n\n\nNow you should see a way to complete the proof: the statement x ∈ ∅ is false, so we should be able to apply the disjunctive syllogism rule to h1 to infer the goal x ∈ B. To carry out this plan, we’ll first have to prove x ∉ ∅. We’ll use the have tactic, and since there’s no obvious term-mode proof to justify it, we’ll try a tactic-mode proof.\n\n\ntheorem empty_union {U : Type} (B : Set U) :\n ∅ ∪ B = B := by\n apply Set.ext\n fix x : U\n apply Iff.intro\n · -- (→)\n assume h1 : x ∈ ∅ ∪ B\n define at h1\n have h2 : x ∉ ∅ := by\n\n **done::\n **done::\n · -- (←)\n\n **done::\n done\n\n\nU : Type\nB : Set U\nx : U\nh1 : x ∈ ∅ ∨ x ∈ B\n⊢ x ∉ ∅\n\n\nThe goal for our “proof within a proof” is a negative statement, so proof by contradiction seems like a good start.\n\n\ntheorem empty_union {U : Type} (B : Set U) :\n ∅ ∪ B = B := by\n apply Set.ext\n fix x : U\n apply Iff.intro\n · -- (→)\n assume h1 : x ∈ ∅ ∪ B\n define at h1\n have h2 : x ∉ ∅ := by\n by_contra h3\n **done::\n **done::\n · -- (←)\n\n **done::\n done\n\n\nU : Type\nB : Set U\nx : U\nh1 : x ∈ ∅ ∨ x ∈ B\nh3 : x ∈ ∅\n⊢ False\n\n\nTo see how to use the new assumption h3, we use the tactic define at h3. 
The definition Lean gives for the statement x ∈ ∅ is False. In other words, Lean knows that, by the definition of ∅, the statement x ∈ ∅ is false. Since False is our goal, this completes the inner proof, and we can return to the main proof.\n\n\ntheorem empty_union {U : Type} (B : Set U) :\n ∅ ∪ B = B := by\n apply Set.ext\n fix x : U\n apply Iff.intro\n · -- (→)\n assume h1 : x ∈ ∅ ∪ B\n define at h1\n have h2 : x ∉ ∅ := by\n by_contra h3\n define at h3 --h3 : False\n show False from h3\n done\n **done::\n · -- (←)\n\n **done::\n done\n\n\ncase h.mp\nU : Type\nB : Set U\nx : U\nh1 : x ∈ ∅ ∨ x ∈ B\nh2 : x ∉ ∅\n⊢ x ∈ B\n\n\nNow that we have established the claim h2 : x ∉ ∅, we can apply the disjunctive syllogism rule to h1 and h2 to reach the goal. This completes the left-to-right direction of the biconditional proof, so we move on to the right-to-left direction.\n\n\ntheorem empty_union {U : Type} (B : Set U) :\n ∅ ∪ B = B := by\n apply Set.ext\n fix x : U\n apply Iff.intro\n · -- (→)\n assume h1 : x ∈ ∅ ∪ B\n define at h1\n have h2 : x ∉ ∅ := by\n by_contra h3\n define at h3 --h3 : False\n show False from h3\n done\n disj_syll h1 h2 --h1 : x ∈ B\n show x ∈ B from h1\n done\n · -- (←)\n\n **done::\n done\n\n\ncase h.mpr\nU : Type\nB : Set U\nx : U\n⊢ x ∈ B → x ∈ ∅ ∪ B\n\n\nThis direction of the biconditional proof is easier: once we introduce the assumption h1 : x ∈ B, our goal will be x ∈ ∅ ∪ B, which means x ∈ ∅ ∨ x ∈ B, and we can prove it with the proof Or.inr h1.\n\n\ntheorem empty_union {U : Type} (B : Set U) :\n ∅ ∪ B = B := by\n apply Set.ext\n fix x : U\n apply Iff.intro\n · -- (→)\n assume h1 : x ∈ ∅ ∪ B\n define at h1\n have h2 : x ∉ ∅ := by\n by_contra h3\n define at h3 --h3 : False\n show False from h3\n done\n disj_syll h1 h2 --h1 : x ∈ B\n show x ∈ B from h1\n done\n · -- (←)\n assume h1 : x ∈ B\n show x ∈ ∅ ∪ B from Or.inr h1\n done\n done\n\n\nNo goals\n\n\nThe second fact that was called “clear” in the proof from Example 3.6.2 was the equation C ∪ 
D = D ∪ C. This looks like an instance of the commutativity of the union operator. Let’s prove that union is commutative.\n\n\ntheorem union_comm {U : Type} (X Y : Set U) :\n X ∪ Y = Y ∪ X := by\n \n **done::\n\n\nU : Type\nX Y : Set U\n⊢ X ∪ Y = Y ∪ X\n\n\nOnce again, we begin with apply Set.ext, which converts the goal to ∀ (x : U), x ∈ X ∪ Y ↔︎ x ∈ Y ∪ X, and then fix x : U.\n\n\ntheorem union_comm {U : Type} (X Y : Set U) :\n X ∪ Y = Y ∪ X := by\n apply Set.ext\n fix x : U\n **done::\n\n\ncase h\nU : Type\nX Y : Set U\nx : U\n⊢ x ∈ X ∪ Y ↔ x ∈ Y ∪ X\n\n\nTo understand the goal better, we’ll write out the definitions of the two sides of the biconditional. We use an extension of the define tactic that allows us to write out the definition of just a part of a given or the goal. The tactic define : x ∈ X ∪ Y will replace x ∈ X ∪ Y with its definition wherever it appears in the goal, and then define : x ∈ Y ∪ X will replace x ∈ Y ∪ X with its definition. (Note that define : X ∪ Y produces a result that is not as useful. It is usually best to define a complete statement rather than just a part of a statement. As usual, you can add at to do the replacements in a given rather than the goal.)\n\n\ntheorem union_comm {U : Type} (X Y : Set U) :\n X ∪ Y = Y ∪ X := by\n apply Set.ext\n fix x : U\n define : x ∈ X ∪ Y\n define : x ∈ Y ∪ X\n **done::\n\n\ncase h\nU : Type\nX Y : Set U\nx : U\n⊢ x ∈ X ∨ x ∈ Y ↔\n>> x ∈ Y ∨ x ∈ X\n\n\nBy the way, there are similar extensions of all of the tactics contrapos, demorgan, conditional, double_neg, bicond_neg, and quant_neg that allow you to use a logical equivalence to rewrite just a part of a formula. For example, if your goal is P ∧ (¬Q → R), then the tactic contrapos : ¬Q → R will change the goal to P ∧ (¬R → Q). If you have a given h : P → ¬∀ (x : U), Q x, then the tactic quant_neg : ¬∀ (x : U), Q x at h will change h to h : P → ∃ (x : U), ¬Q x.\nReturning to our proof of union_comm: the goal is now x ∈ X ∨ x ∈ Y ↔︎ x ∈ Y ∨ x ∈ X. 
You could prove this by a somewhat tedious application of the rules for biconditionals and disjunctions that were discussed in the last two sections, and we invite you to try it. But there is another possibility. The goal now has the form P ∨ Q ↔︎ Q ∨ P, which is the commutative law for “or” (see Section 1.2 of HTPI). We saw in a previous example that Lean has, in its library, the associative law for “and”; it is called and_assoc. Does Lean also know the commutative law for “or”?\nTry typing #check @or_ in VS Code. After a few seconds, a pop-up window appears with possible completions of this command. You will see or_assoc on the list, as well as or_comm. Select or_comm, and you’ll get this response: @or_comm : ∀ {a b : Prop}, a ∨ b ↔︎ b ∨ a. Since a and b are implicit arguments in this theorem, you can use or_comm to prove any statement of the form a ∨ b ↔︎ b ∨ a, where Lean will figure out for itself what a and b stand for. In particular, or_comm will prove our current goal.\n\n\ntheorem union_comm {U : Type} (X Y : Set U) :\n X ∪ Y = Y ∪ X := by\n apply Set.ext\n fix x : U\n define : x ∈ X ∪ Y\n define : x ∈ Y ∪ X\n show x ∈ X ∨ x ∈ Y ↔ x ∈ Y ∨ x ∈ X from or_comm\n done\n\n\nNo goals\n\n\nWe have now proven the two statements that were said to be “clearly” true in the proof in Example 3.6.2 of HTPI, and we have given them names. And that means that we can now use these theorems, in the file containing these proofs, to prove other theorems. As with any theorem in Lean’s library, you can use the #check command to confirm what these theorems say. If you type #check @empty_union and #check @union_comm, you will get these results:\n\n@empty_union : ∀ {U : Type} (B : Set U), ∅ ∪ B = B\n\n@union_comm : ∀ {U : Type} (X Y : Set U), X ∪ Y = Y ∪ X\n\nNotice that in both theorems we used curly braces when we introduced the type U, so it is an implicit argument and will not need to be specified when we apply the theorems. (Why did we decide to make U an implicit argument? 
Well, when we apply the theorem empty_union we will be specifying the set B, and when we apply union_comm we will be specifying the sets X and Y. Lean can figure out what U is by examining the types of these sets, so there is no need to specify it separately.)\nWe are finally ready to prove the theorem from Example 3.6.2. Here is the theorem:\n\n\ntheorem Example_3_6_2 (U : Type) :\n ∃! (A : Set U), ∀ (B : Set U),\n A ∪ B = B := by\n\n **done::\n\n\nU : Type\n⊢ ∃! (A : Set U),\n>> ∀ (B : Set U),\n>> A ∪ B = B\n\n\nThe goal starts with ∃!, so we use our new tactic, exists_unique.\n\n\ntheorem Example_3_6_2 (U : Type) :\n ∃! (A : Set U), ∀ (B : Set U),\n A ∪ B = B := by\n exists_unique\n **done::\n\n\ncase Existence\nU : Type\n⊢ ∃ (A : Set U),\n>> ∀ (B : Set U),\n>> A ∪ B = B\ncase Uniqueness\nU : Type\n⊢ ∀ (A_1 A_2 : Set U),\n>> (∀ (B : Set U),\n>> A_1 ∪ B = B) →\n>> (∀ (B : Set U),\n>> A_2 ∪ B = B) →\n>> A_1 = A_2\n\n\nWe have two goals, labeled Existence and Uniqueness. Imitating the proof from HTPI, we prove existence by using the value ∅ for A.\n\n\ntheorem Example_3_6_2 (U : Type) :\n ∃! (A : Set U), ∀ (B : Set U),\n A ∪ B = B := by\n exists_unique\n · -- Existence\n apply Exists.intro ∅\n **done::\n · -- Uniqueness\n\n **done::\n done\n\n\ncase Existence\nU : Type\n⊢ ∀ (B : Set U),\n>> ∅ ∪ B = B\n\n\nThe goal is now precisely the statement of the theorem empty_union, so we can prove it by simply citing that theorem.\n\n\ntheorem Example_3_6_2 (U : Type) :\n ∃! 
(A : Set U), ∀ (B : Set U),\n A ∪ B = B := by\n exists_unique\n · -- Existence\n apply Exists.intro ∅\n show ∀ (B : Set U), ∅ ∪ B = B from empty_union\n done\n · -- Uniqueness\n\n **done::\n done\n\n\ncase Uniqueness\nU : Type\n⊢ ∀ (A_1 A_2 : Set U),\n>> (∀ (B : Set U),\n>> A_1 ∪ B = B) →\n>> (∀ (B : Set U),\n>> A_2 ∪ B = B) →\n>> A_1 = A_2\n\n\nFor the uniqueness proof, we begin by introducing arbitrary sets C and D and assuming ∀ (B : Set U), C ∪ B = B and ∀ (B : Set U), D ∪ B = B, exactly as in the HTPI proof.\n\n\ntheorem Example_3_6_2 (U : Type) :\n ∃! (A : Set U), ∀ (B : Set U),\n A ∪ B = B := by\n exists_unique\n · -- Existence\n apply Exists.intro ∅\n show ∀ (B : Set U), ∅ ∪ B = B from empty_union\n done\n · -- Uniqueness\n fix C : Set U; fix D : Set U\n assume h1 : ∀ (B : Set U), C ∪ B = B\n assume h2 : ∀ (B : Set U), D ∪ B = B\n **done::\n done\n\n\ncase Uniqueness\nU : Type\nC D : Set U\nh1 : ∀ (B : Set U),\n>> C ∪ B = B\nh2 : ∀ (B : Set U),\n>> D ∪ B = B\n⊢ C = D\n\n\nThe next step in HTPI was to apply h1 to D, and h2 to C. We do the same thing in Lean.\n\n\ntheorem Example_3_6_2 (U : Type) :\n ∃! (A : Set U), ∀ (B : Set U),\n A ∪ B = B := by\n exists_unique\n · -- Existence\n apply Exists.intro ∅\n show ∀ (B : Set U), ∅ ∪ B = B from empty_union\n done\n · -- Uniqueness\n fix C : Set U; fix D : Set U\n assume h1 : ∀ (B : Set U), C ∪ B = B\n assume h2 : ∀ (B : Set U), D ∪ B = B\n have h3 : C ∪ D = D := h1 D\n have h4 : D ∪ C = C := h2 C \n **done::\n done\n\n\ncase Uniqueness\nU : Type\nC D : Set U\nh1 : ∀ (B : Set U),\n>> C ∪ B = B\nh2 : ∀ (B : Set U),\n>> D ∪ B = B\nh3 : C ∪ D = D\nh4 : D ∪ C = C\n⊢ C = D\n\n\nThe goal can now be achieved by stringing together a sequence of equations: C = D ∪ C = C ∪ D = D. The first of these equations is h4.symm—that is, h4 read backwards; the second follows from the commutative law for union; and the third is h3. 
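Before assembling the calculational proof, here is a tiny standalone sketch of how .symm and a calc chain fit together, using natural numbers rather than sets (an illustration only, not part of the theorem):

```lean
example (a b c : Nat) (h1 : b = a) (h2 : b = c) : a = c :=
  calc a
    _ = b := h1.symm   -- h1 read backwards
    _ = c := h2
```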
We saw in Section 3.4 that you can prove a biconditional statement in Lean by stringing together a sequence of biconditionals in a calculational proof. Exactly the same method applies to equations. Here is the complete proof of the theorem:\ntheorem Example_3_6_2 (U : Type) :\n ∃! (A : Set U), ∀ (B : Set U),\n A ∪ B = B := by\n exists_unique\n · -- Existence\n apply Exists.intro ∅\n show ∀ (B : Set U), ∅ ∪ B = B from empty_union\n done\n · -- Uniqueness\n fix C : Set U; fix D : Set U\n assume h1 : ∀ (B : Set U), C ∪ B = B\n assume h2 : ∀ (B : Set U), D ∪ B = B\n have h3 : C ∪ D = D := h1 D\n have h4 : D ∪ C = C := h2 C \n show C = D from\n calc C\n _ = D ∪ C := h4.symm\n _ = C ∪ D := union_comm D C\n _ = D := h3\n done\n done\nSince the statement ∃! (x : U), P x asserts both the existence and the uniqueness of an object satisfying the predicate P, we have the following strategy for using a given of this form (HTPI p. 159):\n\n\nTo use a given of the form ∃! (x : U), P x:\n\nIntroduce a new variable, say a, into the proof to stand for an object of type U for which P a is true. You may also assert that ∀ (x_1 x_2 : U), P x_1 → P x_2 → x_1 = x_2.\n\nIf you have a given h : ∃! (x : U), P x, then the tactic obtain (a : U) (h1 : P a) (h2 : ∀ (x_1 x_2 : U), P x_1 → P x_2 → x_1 = x_2) from h will introduce into the tactic state a new variable a of type U and new givens (h1 : P a) and (h2 : ∀ (x_1 x_2 : U), P x_1 → P x_2 → x_1 = x_2). To illustrate the use of this tactic, let’s prove the theorem in Example 3.6.4 of HTPI.\n\n\ntheorem Example_3_6_4 (U : Type) (A B C : Set U)\n (h1 : ∃ (x : U), x ∈ A ∩ B)\n (h2 : ∃ (x : U), x ∈ A ∩ C)\n (h3 : ∃! (x : U), x ∈ A) :\n ∃ (x : U), x ∈ B ∩ C := by\n\n **done::\n\n\nU : Type\nA B C : Set U\nh1 : ∃ (x : U),\n>> x ∈ A ∩ B\nh2 : ∃ (x : U),\n>> x ∈ A ∩ C\nh3 : ∃! (x : U), x ∈ A\n⊢ ∃ (x : U), x ∈ B ∩ C\n\n\nWe begin by applying the obtain tactic to h1, h2, and h3.
In the case of h3, we get an extra given asserting the uniqueness of the element of A. We also write out the definitions of two of the new givens we obtain.\n\n\ntheorem Example_3_6_4 (U : Type) (A B C : Set U)\n (h1 : ∃ (x : U), x ∈ A ∩ B)\n (h2 : ∃ (x : U), x ∈ A ∩ C)\n (h3 : ∃! (x : U), x ∈ A) :\n ∃ (x : U), x ∈ B ∩ C := by\n obtain (b : U) (h4 : b ∈ A ∩ B) from h1\n obtain (c : U) (h5 : c ∈ A ∩ C) from h2\n obtain (a : U) (h6 : a ∈ A) (h7 : ∀ (y z : U),\n y ∈ A → z ∈ A → y = z) from h3\n define at h4; define at h5\n **done::\n\n\nU : Type\nA B C : Set U\nh1 : ∃ (x : U),\n>> x ∈ A ∩ B\nh2 : ∃ (x : U),\n>> x ∈ A ∩ C\nh3 : ∃! (x : U), x ∈ A\nb : U\nh4 : b ∈ A ∧ b ∈ B\nc : U\nh5 : c ∈ A ∧ c ∈ C\na : U\nh6 : a ∈ A\nh7 : ∀ (y z : U),\n>> y ∈ A → z ∈ A → y = z\n⊢ ∃ (x : U), x ∈ B ∩ C\n\n\nThe key to the rest of the proof is the observation that, by the uniqueness of the element of A, b must be equal to c. To justify this conclusion, note that by two applications of universal instantiation, h7 b c is a proof of b ∈ A → c ∈ A → b = c, and therefore by two applications of modus ponens, h7 b c h4.left h5.left is a proof of b = c.\n\n\ntheorem Example_3_6_4 (U : Type) (A B C : Set U)\n (h1 : ∃ (x : U), x ∈ A ∩ B)\n (h2 : ∃ (x : U), x ∈ A ∩ C)\n (h3 : ∃! (x : U), x ∈ A) :\n ∃ (x : U), x ∈ B ∩ C := by\n obtain (b : U) (h4 : b ∈ A ∩ B) from h1\n obtain (c : U) (h5 : c ∈ A ∩ C) from h2\n obtain (a : U) (h6 : a ∈ A) (h7 : ∀ (y z : U),\n y ∈ A → z ∈ A → y = z) from h3\n define at h4; define at h5\n have h8 : b = c := h7 b c h4.left h5.left\n **done::\n\n\nU : Type\nA B C : Set U\nh1 : ∃ (x : U),\n>> x ∈ A ∩ B\nh2 : ∃ (x : U),\n>> x ∈ A ∩ C\nh3 : ∃! (x : U), x ∈ A\nb : U\nh4 : b ∈ A ∧ b ∈ B\nc : U\nh5 : c ∈ A ∧ c ∈ C\na : U\nh6 : a ∈ A\nh7 : ∀ (y z : U),\n>> y ∈ A → z ∈ A → y = z\nh8 : b = c\n⊢ ∃ (x : U), x ∈ B ∩ C\n\n\nFor our next step, we will need a new tactic. Since we have h8 : b = c, we should be able to replace b with c anywhere it appears. 
The tactic that allows us to do this is called rewrite. If h is a proof of any equation s = t, then rewrite [h] will replace all occurrences of s in the goal with t. Notice that it is the left side of the equation that is replaced with the right side; if you want the replacement to go in the other direction, so that t is replaced with s, you can use rewrite [←h]. (Alternatively, since h.symm is a proof of t = s, you can use rewrite [h.symm].) You can also apply the rewrite tactic to biconditional statements. If you have h : P ↔ Q, then rewrite [h] will cause all occurrences of P in the goal to be replaced with Q (and rewrite [←h] will replace Q with P).\nAs with many other tactics, you can add at h' to specify that the replacement should be done in the given h' rather than the goal. In our case, rewrite [h8] at h4 will change both occurrences of b in h4 to c.\n\n\ntheorem Example_3_6_4 (U : Type) (A B C : Set U)\n (h1 : ∃ (x : U), x ∈ A ∩ B)\n (h2 : ∃ (x : U), x ∈ A ∩ C)\n (h3 : ∃! (x : U), x ∈ A) :\n ∃ (x : U), x ∈ B ∩ C := by\n obtain (b : U) (h4 : b ∈ A ∩ B) from h1\n obtain (c : U) (h5 : c ∈ A ∩ C) from h2\n obtain (a : U) (h6 : a ∈ A) (h7 : ∀ (y z : U),\n y ∈ A → z ∈ A → y = z) from h3\n define at h4; define at h5\n have h8 : b = c := h7 b c h4.left h5.left\n rewrite [h8] at h4\n **done::\n\n\nU : Type\nA B C : Set U\nh1 : ∃ (x : U),\n>> x ∈ A ∩ B\nh2 : ∃ (x : U),\n>> x ∈ A ∩ C\nh3 : ∃! (x : U), x ∈ A\nb c : U\nh4 : c ∈ A ∧ c ∈ B\nh5 : c ∈ A ∧ c ∈ C\na : U\nh6 : a ∈ A\nh7 : ∀ (y z : U),\n>> y ∈ A → z ∈ A → y = z\nh8 : b = c\n⊢ ∃ (x : U), x ∈ B ∩ C\n\n\nNow the right sides of h4 and h5 tell us that we can prove the goal by plugging in c for x. Here is the complete proof:\ntheorem Example_3_6_4 (U : Type) (A B C : Set U)\n (h1 : ∃ (x : U), x ∈ A ∩ B)\n (h2 : ∃ (x : U), x ∈ A ∩ C)\n (h3 : ∃! 
(x : U), x ∈ A) :\n ∃ (x : U), x ∈ B ∩ C := by\n obtain (b : U) (h4 : b ∈ A ∩ B) from h1\n obtain (c : U) (h5 : c ∈ A ∩ C) from h2\n obtain (a : U) (h6 : a ∈ A) (h7 : ∀ (y z : U),\n y ∈ A → z ∈ A → y = z) from h3\n define at h4; define at h5\n have h8 : b = c := h7 b c h4.left h5.left\n rewrite [h8] at h4\n show ∃ (x : U), x ∈ B ∩ C from\n Exists.intro c (And.intro h4.right h5.right)\n done\nYou might want to compare the Lean proof above to the proof of this theorem as it appears in HTPI (HTPI p. 160):\n\nSuppose \\(A\\), \\(B\\), and \\(C\\) are sets, \\(A\\) and \\(B\\) are not disjoint, \\(A\\) and \\(C\\) are not disjoint, and \\(A\\) has exactly one element. Then \\(B\\) and \\(C\\) are not disjoint.\n\n\nProof. Since \\(A\\) and \\(B\\) are not disjoint, we can let \\(b\\) be something such that \\(b \\in A\\) and \\(b \\in B\\). Similarly, since \\(A\\) and \\(C\\) are not disjoint, there is some object \\(c\\) such that \\(c \\in A\\) and \\(c \\in C\\). Since \\(A\\) has only one element, we must have \\(b = c\\). Thus \\(b = c \\in B \\cap C\\) and therefore \\(B\\) and \\(C\\) are not disjoint.  □\n\nBefore ending this section, we return to the question of how you can tell if a theorem you want to use is in Lean’s library. In an earlier example, we guessed that the commutative law for “or” might be in Lean’s library, and we were then able to use the #check command to confirm it. But there is another technique that we could have used: the tactic apply?, which asks Lean to search through its library of theorems to see if there is one that could be applied to prove the goal. Let’s return to our proof of the theorem union_comm, which started like this:\n\n\ntheorem union_comm {U : Type} (X Y : Set U) :\n X ∪ Y = Y ∪ X := by\n apply Set.ext\n fix x : U\n define : x ∈ X ∪ Y\n define : x ∈ Y ∪ X\n **done::\n\n\ncase h\nU : Type\nX Y : Set U\nx : U\n⊢ x ∈ X ∨ x ∈ Y ↔\n>> x ∈ Y ∨ x ∈ X\n\n\nNow let’s give the apply? 
tactic a try.\ntheorem union_comm {U : Type} (X Y : Set U) :\n X ∪ Y = Y ∪ X := by\n apply Set.ext\n fix x : U\n define : x ∈ X ∪ Y\n define : x ∈ Y ∪ X\n ++apply?::\n done\nIt takes a few seconds for Lean to search its library of theorems, but eventually a blue squiggle appears under apply?, indicating that the tactic has produced an answer. You will find the answer in the Infoview pane: Try this: exact Or.comm. The word exact is the name of a tactic that we have not discussed; it is a shorthand for show _ from, where the blank gets filled in with the goal. Thus, you can think of apply?’s answer as a shortened form of the tactic\n\nshow x ∈ X ∨ x ∈ Y ↔ x ∈ Y ∨ x ∈ X from Or.comm\n\nThe command #check @Or.comm will tell you that Or.comm is just an alternative name for the theorem or_comm. So the step suggested by the apply? tactic is essentially the same as the step we used earlier to complete the proof.\nUsually your proof will be more readable if you use the show tactic to state explicitly the goal that is being proven. This also gives Lean a chance to correct you if you have become confused about what goal you are proving. But sometimes—for example, if the goal is very long—it is convenient to use the exact tactic instead. You might think of exact as meaning “the following is a term-mode proof that is exactly what is needed to prove the goal.”\nThe apply? tactic has not only come up with a suggested tactic, it has applied that tactic, and the proof is now complete. You can confirm that the tactic completes the proof by replacing the line apply? in the proof with apply?’s suggested exact tactic.\nThe apply? tactic is somewhat unpredictable; sometimes it is able to find the right theorem in the library, and sometimes it isn’t. But it is always worth a try. 
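For instance, here is a minimal sketch (not an example from HTPI) of the exact tactic closing a goal in one step, using the library theorem Or.comm discussed above:

```lean
example (P Q : Prop) : P ∨ Q ↔ Q ∨ P := by
  exact Or.comm   --shorthand for: show P ∨ Q ↔ Q ∨ P from Or.comm
```

On a goal like this one, apply? would typically suggest this same exact step, since Or.comm matches the goal directly.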
Another way to try to find theorems is to visit the documentation page for Lean’s mathematics library, which can be found at https://leanprover-community.github.io/mathlib4_docs/.\n\n\nExercises\n\ntheorem Exercise_3_4_15 (U : Type) (B : Set U) (F : Set (Set U)) :\n ⋃₀ {X : Set U | ∃ (A : Set U), A ∈ F ∧ X = A \\ B}\n ⊆ ⋃₀ (F \\ 𝒫 B) := sorry\n\n\ntheorem Exercise_3_5_9 (U : Type) (A B : Set U)\n (h1 : 𝒫 (A ∪ B) = 𝒫 A ∪ 𝒫 B) : A ⊆ B ∨ B ⊆ A := by\n --Hint: Start like this:\n have h2 : A ∪ B ∈ 𝒫 (A ∪ B) := sorry\n \n **done::\n\n\ntheorem Exercise_3_6_6b (U : Type) :\n ∃! (A : Set U), ∀ (B : Set U), A ∪ B = A := sorry\n\n\ntheorem Exercise_3_6_7b (U : Type) :\n ∃! (A : Set U), ∀ (B : Set U), A ∩ B = A := sorry\n\n\ntheorem Exercise_3_6_8a (U : Type) : ∀ (A : Set U),\n ∃! (B : Set U), ∀ (C : Set U), C \\ A = C ∩ B := sorry\n\n\ntheorem Exercise_3_6_10 (U : Type) (A : Set U)\n (h1 : ∀ (F : Set (Set U)), ⋃₀ F = A → A ∈ F) :\n ∃! (x : U), x ∈ A := by\n --Hint: Start like this:\n set F0 : Set (Set U) := {X : Set U | X ⊆ A ∧ ∃! 
(x : U), x ∈ X}\n --Now F0 is in the tactic state, with the definition above\n have h2 : ⋃₀ F0 = A := sorry\n \n **done::" }, { "objectID": "Chap3.html#more-examples-of-proofs", @@ -158,7 +158,7 @@ "href": "Chap4.html", "title": "4  Relations", "section": "", - "text": "$$\n\\newcommand{\\setmin}{\\mathbin{\\backslash}}\n\\newcommand{\\symmdiff}{\\bigtriangleup}\n$$" + "text": "$$\n\\newcommand{\\setmin}{\\mathbin{\\backslash}}\n\\newcommand{\\symmdiff}{\\mathbin{∆}}\n$$" }, { "objectID": "Chap4.html#ordered-pairs-and-cartesian-products", @@ -200,7 +200,7 @@ "href": "Chap5.html", "title": "5  Functions", "section": "", - "text": "$$\n\\newcommand{\\setmin}{\\mathbin{\\backslash}}\n\\newcommand{\\symmdiff}{\\bigtriangleup}\n$$" + "text": "$$\n\\newcommand{\\setmin}{\\mathbin{\\backslash}}\n\\newcommand{\\symmdiff}{\\mathbin{∆}}\n$$" }, { "objectID": "Chap5.html#functions", @@ -242,7 +242,7 @@ "href": "Chap6.html", "title": "6  Mathematical Induction", "section": "", - "text": "$$\n\\newcommand{\\setmin}{\\mathbin{\\backslash}}\n\\newcommand{\\symmdiff}{\\bigtriangleup}\n$$" + "text": "$$\n\\newcommand{\\setmin}{\\mathbin{\\backslash}}\n\\newcommand{\\symmdiff}{\\mathbin{∆}}\n$$" }, { "objectID": "Chap6.html#proof-by-mathematical-induction", @@ -263,14 +263,14 @@ "href": "Chap6.html#recursion", "title": "6  Mathematical Induction", "section": "6.3. Recursion", - "text": "6.3. Recursion\nIn the last two sections, we saw that we can prove that all natural numbers have some property by proving that 0 has the property, and also that for every natural number \\(n\\), if \\(n\\) has the property then so does \\(n + 1\\). In this section we will see that a similar idea can be used to define a function whose domain is the natural numbers. 
We can define a function \\(f\\) with domain \\(\\mathbb{N}\\) by specifying the value of \\(f(0)\\), and also saying how to compute \\(f(n+1)\\) if you already know the value of \\(f(n)\\).\nFor example, we can define a function \\(f : \\mathbb{N} \\to \\mathbb{N}\\) as follows:\n\n\\(f(0) = 1\\); for every \\(n \\in \\mathbb{N}\\), \\(f(n+1) = (n+1) \\cdot f(n)\\).\n\nHere is the same definition written in Lean. (For reasons that will become clear shortly, we have given the function the name fact.)\ndef fact (k : Nat) : Nat :=\n match k with\n | 0 => 1\n | n + 1 => (n + 1) * fact n\nLean can use this definition to compute fact k for any natural number k. The match statement tells Lean to try to match the input k with one of the two patterns 0 and n + 1, and then to use the corresponding formula after => to compute fact k. For example, if we ask Lean for fact 4, it first checks if 4 matches 0. Since it doesn’t, it goes on to the next line and determines that 4 matches the pattern n + 1, with n = 3, so it uses the formula fact 4 = 4 * fact 3. Of course, now it must compute fact 3, which it does in the same way: 3 matches n + 1 with n = 2, so fact 3 = 3 * fact 2. Continuing in this way, Lean determines that\n\nfact 4 = 4 * fact 3 = 4 * (3 * fact 2) = 4 * (3 * (2 * fact 1))\n = 4 * (3 * (2 * (1 * fact 0))) = 4 * (3 * (2 * (1 * 1))) = 24.\n\nYou can confirm this with the #eval command:\n#eval fact 4 --Answer: 24\nOf course, by now you have probably guessed why we used the name fact for his function: fact k is k factorial—the product of all the numbers from 1 to k.\nThis style of definition is called a recursive definition. If a function is defined by a recursive definition, then theorems about that function are often most easily proven by induction. For example, here is a theorem about the factorial function. 
It is Example 6.3.1 in HTPI, and we begin the Lean proof by imitating the proof in HTPI.\ntheorem ??Example_6_3_1:: : ∀ n ≥ 4, fact n > 2 ^ n := by\n by_induc\n · -- Base Case\n decide\n done\n · -- Induction Step\n fix n : Nat\n assume h1 : n ≥ 4\n assume ih : fact n > 2 ^ n\n show fact (n + 1) > 2 ^ (n + 1) from\n calc fact (n + 1)\n _ = (n + 1) * fact n := by rfl\n _ > (n + 1) * 2 ^ n := sorry\n _ > 2 * 2 ^ n := sorry\n _ = 2 ^ (n + 1) := by ring\n done\n done\nThere are two steps in the calculational proof at the end that require justification. The first says that (n + 1) * fact n > (n + 1) * 2 ^ n, which should follow from the inductive hypothesis ih : fact n > 2 ^ n by multiplying both sides by n + 1. Is there a theorem that would justify this inference?\nThis may remind you of a step in Example_6_1_3 where we used the theorem Nat.mul_le_mul_right, which says ∀ {n m : ℕ} (k : ℕ), n ≤ m → n * k ≤ m * k. Our situation in this example is similar, but it involves a strict inequality (> rather than ≥) and it involves multiplying on the left rather than the right. Many theorems about inequalities in Lean’s library contain either le (for “less than or equal to”) or lt (for “less than”) in their names, but they can also be used to prove statements involving ≥ or >. Perhaps the theorem we need is named something like Nat.mul_lt_mul_left. If you type #check @Nat.mul_lt_mul_ into VS Code, a pop-up window will appear listing several theorems that begin with Nat.mul_lt_mul_. There is no Nat.mul_lt_mul_left, but there is a theorem called Nat.mul_lt_mul_of_pos_left, and its meaning is\n\n@Nat.mul_lt_mul_of_pos_left : ∀ {n m k : ℕ},\n n < m → k > 0 → k * n < k * m\n\nLean has correctly reminded us that, to multiply both sides of a strict inequality by a number k, we need to know that k > 0. So in our case, we’ll need to prove that n + 1 > 0. 
Once we have that, we can use the theorem Nat.mul_lt_mul_of_pos_left to eliminate the first sorry.\nThe second sorry is similar: (n + 1) * 2 ^ n > 2 * 2 ^ n should follow from n + 1 > 2 and 2 ^ n > 0, and you can verify that the theorem that will justify this inference is Nat.mul_lt_mul_of_pos_right.\nSo we have three inequalities that we need to prove before we can justify the steps of the calculational proof: n + 1 > 0, n + 1 > 2, and 2 ^ n > 0. We’ll insert have steps before the calculational proof to assert these three inequalities. If you try it, you’ll find that linarith can prove the first two, but not the third.\nHow can we prove 2 ^ n > 0? It is often helpful to think about whether there is a general principle that is behind a statement we are trying to prove. In our case, the inequality 2 ^ n > 0 is an instance of the general fact that if m and n are any natural numbers with m > 0, then m ^ n > 0. Maybe that fact is in Lean’s library:\nexample (m n : Nat) (h : m > 0) : m ^ n > 0 := by ++apply?::\nThe apply? tactic comes up with exact Nat.pos_pow_of_pos n h, and #check @pos_pow_of_pos gives the result\n\n@Nat.pos_pow_of_pos : ∀ {n : ℕ} (m : ℕ), 0 < n → 0 < n ^ m\n\nThat means that we can use Nat.pos_pow_of_pos to prove 2 ^ n > 0, but first we’ll need to prove that 2 > 0. We now have all the pieces we need; putting them together leads to this proof:\ntheorem Example_6_3_1 : ∀ n ≥ 4, fact n > 2 ^ n := by\n by_induc\n · -- Base Case\n decide\n done\n · -- Induction Step\n fix n : Nat\n assume h1 : n ≥ 4\n assume ih : fact n > 2 ^ n\n have h2 : n + 1 > 0 := by linarith\n have h3 : n + 1 > 2 := by linarith\n have h4 : 2 > 0 := by linarith\n have h5 : 2 ^ n > 0 := Nat.pos_pow_of_pos n h4\n show fact (n + 1) > 2 ^ (n + 1) from\n calc fact (n + 1)\n _ = (n + 1) * fact n := by rfl\n _ > (n + 1) * 2 ^ n := Nat.mul_lt_mul_of_pos_left ih h2\n _ > 2 * 2 ^ n := Nat.mul_lt_mul_of_pos_right h3 h5\n _ = 2 ^ (n + 1) := by ring\n done\n done\nBut there is an easier way. 
Look at the two “>” steps in the calculational proof at the end of Example_6_3_1. In both cases, we took a known relationship between two quantities and did something to both sides that preserved the relationship. In the first case, the known relationship was ih : fact n > 2 ^ n, and we multiplied both sides by n + 1 on the left; in the second, the known relationship was h3 : n + 1 > 2, and we multiplied both sides by 2 ^ n on the right. To justify these steps, we had to find the right theorems in Lean’s library, and we ended up needing auxiliary positivity facts: h2 : n + 1 > 0 in the first case and h5 : 2 ^ n > 0 in the second. There is a tactic that can simplify these steps: if h is a proof of a statement asserting a relationship between two quantities, then the tactic rel [h] will attempt to prove any statement obtained from that relationship by applying the same operation to both sides. The tactic will try to find a theorem in Lean’s library that says that the operation preserves the relationship, and if the theorem requires auxiliary positivity facts, it will try to prove those facts as well. The rel tactic doesn’t always succeed, but when it does, it saves you the trouble of searching through the library for the necessary theorems. In this case, the tactic allows us to give a much simpler proof of Example_6_3_1:\ntheorem Example_6_3_1 : ∀ n ≥ 4, fact n > 2 ^ n := by\n by_induc\n · -- Base Case\n decide\n done\n · -- Induction Step\n fix n : Nat\n assume h1 : n ≥ 4\n assume ih : fact n > 2 ^ n\n have h2 : n + 1 > 2 := by linarith\n show fact (n + 1) > 2 ^ (n + 1) from\n calc fact (n + 1)\n _ = (n + 1) * fact n := by rfl\n _ > (n + 1) * 2 ^ n := by rel [ih]\n _ > 2 * 2 ^ n := by rel [h2]\n _ = 2 ^ (n + 1) := by ring\n done\n done\nThe next example in HTPI is a proof of one of the laws of exponents: a ^ (m + n) = a ^ m * a ^ n. Lean’s definition of exponentiation with natural number exponents is recursive. 
For some reason, the definitions are slightly different for different kinds of bases. The definitions Lean uses are essentially as follows:\n--For natural numbers b and k, b ^ k = nat_pow b k:\ndef nat_pow (b k : Nat) : Nat :=\n match k with\n | 0 => 1\n | n + 1 => (nat_pow b n) * b\n\n--For a real number b and a natural number k, b ^ k = real_pow b k:\ndef real_pow (b : Real) (k : Nat) : Real :=\n match k with\n | 0 => 1\n | n + 1 => b * (real_pow b n)\nLet’s prove the addition law for exponents:\ntheorem Example_6_3_2_cheating : ∀ (a : Real) (m n : Nat),\n a ^ (m + n) = a ^ m * a ^ n := by\n fix a : Real; fix m : Nat; fix n : Nat\n ring\n done\nWell, that wasn’t really fair. The ring tactic knows the laws of exponents, so it has no trouble proving this theorem. But we want to know why the law holds, so let’s see if we can prove it without using ring. The following proof is essentially the same as the proof in HTPI:\ntheorem Example_6_3_2 : ∀ (a : Real) (m n : Nat),\n a ^ (m + n) = a ^ m * a ^ n := by\n fix a : Real; fix m : Nat\n --Goal : ∀ (n : Nat), a ^ (m + n) = a ^ m * a ^ n\n by_induc\n · -- Base Case\n show a ^ (m + 0) = a ^ m * a ^ 0 from\n calc a ^ (m + 0)\n _ = a ^ m := by rfl\n _ = a ^ m * 1 := (mul_one (a ^ m)).symm\n _ = a ^ m * a ^ 0 := by rfl\n done\n · -- Induction Step\n fix n : Nat\n assume ih : a ^ (m + n) = a ^ m * a ^ n\n show a ^ (m + (n + 1)) = a ^ m * a ^ (n + 1) from\n calc a ^ (m + (n + 1))\n _ = a ^ ((m + n) + 1) := by rw [add_assoc]\n _ = a * a ^ (m + n) := by rfl\n _ = a * (a ^ m * a ^ n) := by rw [ih]\n _ = a ^ m * (a * a ^ n) := by\n rw [←mul_assoc, mul_comm a, mul_assoc]\n _ = a ^ m * (a ^ (n + 1)) := by rfl\n done\n done\nFinally, we’ll prove the theorem in Example 6.3.4 of HTPI, which again involves exponentiation with natural number exponents. 
Here’s the beginning of the proof:\n\n\ntheorem Example_6_3_4 : ∀ (x : Real), x > -1 →\n ∀ (n : Nat), (1 + x) ^ n ≥ 1 + n * x := by\n fix x : Real\n assume h1 : x > -1\n **done::\n\n\nx : ℝ\nh1 : x > -1\n⊢ ∀ (n : ℕ),\n>> (1 + x) ^ n ≥\n>> 1 + ↑n * x\n\n\nLook carefully at the goal in the tactic state. Why is there a ↑ before the last n? The reason has to do with types. The variable x has type Real and n has type Nat, so how can Lean multiply n by x? Remember, in Lean, the natural numbers are not a subset of the real numbers. The two types are completely separate, but for each natural number, there is a corresponding real number. To multiply n by x, Lean had to convert n to the corresponding real number, through a process called coercion. The notation ↑n denotes the result of coercing (or casting) n to another type—in this case, Real. Since ↑n and x are both real numbers, Lean can use the multiplication operation on the real numbers to multiply them. (To type ↑ in VSCode, type \\uparrow, or just \\u.)\nAs we will see, the need for coercion in this example will make the proof a bit more complicated, because we’ll need to use some theorems about coercions. Theorems about coercion of natural numbers to some other type often have names that start Nat.cast.\nContinuing with the proof, since exponentiation is defined recursively, let’s try mathematical induction:\n\n\ntheorem Example_6_3_4 : ∀ (x : Real), x > -1 →\n ∀ (n : Nat), (1 + x) ^ n ≥ 1 + n * x := by\n fix x : Real\n assume h1 : x > -1\n by_induc\n · -- Base Case\n\n **done::\n · -- Induction Step\n\n **done:: \n done\n\n\ncase Base_Case\nx : ℝ\nh1 : x > -1\n⊢ (1 + x) ^ 0 ≥\n>> 1 + ↑0 * x\n\n\nYou might think that linarith could prove the goal for the base case, but it can’t. The problem is the ↑0, which denotes the result of coercing the natural number 0 to a real number. Of course, that should be the real number 0, but is it? Yes, but the linarith tactic doesn’t know that. 
The theorem Nat.cast_zero says that ↑0 = 0 (where the 0 on the right side of the equation is the real number 0), so the tactic rewrite [Nat.cast_zero] will convert ↑0 to 0. After that step, linarith can complete the proof of the base case, and we can start on the induction step.\n\n\ntheorem Example_6_3_4 : ∀ (x : Real), x > -1 →\n ∀ (n : Nat), (1 + x) ^ n ≥ 1 + n * x := by\n fix x : Real\n assume h1 : x > -1\n by_induc\n · -- Base Case\n rewrite [Nat.cast_zero]\n linarith\n done\n · -- Induction Step\n fix n : Nat\n assume ih : (1 + x) ^ n ≥ 1 + n * x\n **done::\n done\n\n\ncase Induction_Step\nx : ℝ\nh1 : x > -1\nn : ℕ\nih : (1 + x) ^ n ≥\n>> 1 + ↑n * x\n⊢ (1 + x) ^ (n + 1) ≥\n>> 1 + ↑(n + 1) * x\n\n\nOnce again, there’s a complication caused by coercion. The inductive hypothesis talks about ↑n, but the goal involves ↑(n + 1). What is the relationship between these? Surely it should be the case that ↑(n + 1) = ↑n + 1; that is, the result of coercing the natural number n + 1 to a real number should be one larger than the result of coercing n to a real number. The theorem Nat.cast_succ says exactly that, so rewrite [Nat.cast_succ] will change the ↑(n + 1) in the goal to ↑n + 1. (The number n + 1 is sometimes called the successor of n, and succ is short for “successor.”) With that change, we can continue with the proof. 
The following proof is modeled on the proof in HTPI.\ntheorem ??Example_6_3_4:: : ∀ (x : Real), x > -1 →\n ∀ (n : Nat), (1 + x) ^ n ≥ 1 + n * x := by\n fix x : Real\n assume h1 : x > -1\n by_induc\n · -- Base Case\n rewrite [Nat.cast_zero]\n linarith\n done\n · -- Induction Step\n fix n : Nat\n assume ih : (1 + x) ^ n ≥ 1 + n * x\n rewrite [Nat.cast_succ]\n show (1 + x) ^ (n + 1) ≥ 1 + (n + 1) * x from\n calc (1 + x) ^ (n + 1)\n _ = (1 + x) * (1 + x) ^ n := by rfl\n _ ≥ (1 + x) * (1 + n * x) := sorry\n _ = 1 + x + n * x + n * x ^ 2 := by ring\n _ ≥ 1 + x + n * x + 0 := sorry\n _ = 1 + (n + 1) * x := by ring\n done\n done\nNote that in the calculational proof, each n or n + 1 that is multiplied by x is really ↑n or ↑n + 1, but we don’t need to say so explicitly; Lean fills in coercions automatically when they are required.\nAll that’s left is to replace the two occurrences of sorry with justifications. The first sorry step should follow from the inductive hypothesis by multiplying both sides by 1 + x, so a natural attempt to justify it would be by rel [ih]. Unfortunately, we get an error message saying that rel failed. The error message tells us that rel needed to know that 0 ≤ 1 + x, and it was unable to prove it, so we’ll have to provide a proof of that statement ourselves. 
Fortunately, linarith can handle it (deducing it from h1 : x > -1), and once we fill in that additional step, the rel tactic succeeds.\ntheorem ??Example_6_3_4:: : ∀ (x : Real), x > -1 →\n ∀ (n : Nat), (1 + x) ^ n ≥ 1 + n * x := by\n fix x : Real\n assume h1 : x > -1\n by_induc\n · -- Base Case\n rewrite [Nat.cast_zero]\n linarith\n done\n · -- Induction Step\n fix n : Nat\n assume ih : (1 + x) ^ n ≥ 1 + n * x\n rewrite [Nat.cast_succ]\n have h2 : 0 ≤ 1 + x := by linarith\n show (1 + x) ^ (n + 1) ≥ 1 + (n + 1) * x from\n calc (1 + x) ^ (n + 1)\n _ = (1 + x) * (1 + x) ^ n := by rfl\n _ ≥ (1 + x) * (1 + n * x) := by rel [ih]\n _ = 1 + x + n * x + n * x ^ 2 := by ring\n _ ≥ 1 + x + n * x + 0 := sorry\n _ = 1 + (n + 1) * x := by ring\n done\n done\nFor the second sorry step, we’ll need to know that n * x ^ 2 ≥ 0. To prove it, we start with the fact that the square of any real number is nonnegative:\n\n@sq_nonneg : ∀ {R : Type u_1} [inst : LinearOrderedRing R]\n (a : R), 0 ≤ a ^ 2\n\nAs usual, we don’t need to pay much attention to the implicit arguments; what is important is the last line, which tells us that sq_nonneg x is a proof of x ^ 2 ≥ 0. To get n * x ^ 2 ≥ 0 we just have to multiply both sides by n, which we can justify with the rel tactic, and then one more application of rel will handle the remaining sorry. 
Here is the complete proof:\ntheorem Example_6_3_4 : ∀ (x : Real), x > -1 →\n ∀ (n : Nat), (1 + x) ^ n ≥ 1 + n * x := by\n fix x : Real\n assume h1 : x > -1\n by_induc\n · -- Base Case\n rewrite [Nat.cast_zero]\n linarith\n done\n · -- Induction Step\n fix n : Nat\n assume ih : (1 + x) ^ n ≥ 1 + n * x\n rewrite [Nat.cast_succ]\n have h2 : 0 ≤ 1 + x := by linarith\n have h3 : x ^ 2 ≥ 0 := sq_nonneg x\n have h4 : n * x ^ 2 ≥ 0 :=\n calc n * x ^ 2\n _ ≥ n * 0 := by rel [h3]\n _ = 0 := by ring\n show (1 + x) ^ (n + 1) ≥ 1 + (n + 1) * x from\n calc (1 + x) ^ (n + 1)\n _ = (1 + x) * (1 + x) ^ n := by rfl\n _ ≥ (1 + x) * (1 + n * x) := by rel [ih]\n _ = 1 + x + n * x + n * x ^ 2 := by ring\n _ ≥ 1 + x + n * x + 0 := by rel [h4]\n _ = 1 + (n + 1) * x := by ring\n done\n done\nBefore ending this section, we’ll return to a topic left unexplained before. We can now describe how Sum i from k to n, f i is defined. The key is a function sum_seq, which is defined by recursion:\ndef sum_seq {A : Type} [AddZeroClass A]\n (m k : Nat) (f : Nat → A) : A :=\n match m with\n | 0 => 0\n | n + 1 => sum_seq n k f + f (k + n)\nTo get an idea of what this definition means, let’s try evaluating sum_seq 3 k f:\n\nsum_seq 3 k f = sum_seq 2 k f + f (k + 2)\n = sum_seq 1 k f + f (k + 1) + f (k + 2)\n = sum_seq 0 k f + f (k + 0) + f (k + 1) + f (k + 2)\n = 0 + f (k + 0) + f (k + 1) + f (k + 2)\n = f k + f (k + 1) + f (k + 2).\n\nSo sum_seq 3 k f adds up three consecutive values of f, starting with f k. More generally, sum_seq n k f adds up a sequence of n consecutive values of f, starting with f k. (The implicit arguments say that the type of the values of f can be any type for which + and 0 make sense.) The notation Sum i from k to n, f i is now defined to be a shorthand for sum_seq (n + 1 - k) k f. 
We’ll leave it to you to puzzle out why that gives the desired result.\n\nExercises\n\ntheorem Exercise_6_3_4 : ∀ (n : Nat),\n 3 * (Sum i from 0 to n, (2 * i + 1) ^ 2) =\n (n + 1) * (2 * n + 1) * (2 * n + 3) := sorry\n\n\ntheorem Exercise_6_3_7b (f : Nat → Real) (c : Real) : ∀ (n : Nat),\n Sum i from 0 to n, c * f i = c * Sum i from 0 to n, f i := sorry\n\n\ntheorem fact_pos : ∀ (n : Nat), fact n ≥ 1 := sorry\n\n\n--Hint: Use the theorem fact_pos from the previous exercise.\ntheorem Exercise_6_3_13a (k : Nat) : ∀ (n : Nat),\n fact (k ^ 2 + n) ≥ k ^ (2 * n) := sorry\n\n\n--Hint: Use the theorem in the previous exercise.\n--You may find it useful to first prove a lemma:\n--∀ (k : Nat), 2 * k ^ 2 + 1 ≥ k\ntheorem Exercise_6_3_13b (k : Nat) : ∀ n ≥ 2 * k ^ 2,\n fact n ≥ k ^ n := sorry\n\n6. A sequence is defined recursively as follows:\ndef seq_6_3_15 (k : Nat) : Int :=\n match k with\n | 0 => 0\n | n + 1 => 2 * seq_6_3_15 n + n\nProve the following theorem about this sequence:\ntheorem Exercise_6_3_15 : ∀ (n : Nat),\n seq_6_3_15 n = 2 ^ n - n - 1 := sorry\n7. A sequence is defined recursively as follows:\ndef seq_6_3_16 (k : Nat) : Nat :=\n match k with\n | 0 => 2\n | n + 1 => (seq_6_3_16 n) ^ 2\nFind a formula for seq_6_3_16 n. Fill in the blank in the theorem below with your formula and then prove the theorem.\ntheorem Exercise_6_3_16 : ∀ (n : Nat),\n seq_6_3_16 n = ___ := sorry" + "text": "6.3. Recursion\nIn the last two sections, we saw that we can prove that all natural numbers have some property by proving that 0 has the property, and also that for every natural number \\(n\\), if \\(n\\) has the property then so does \\(n + 1\\). In this section we will see that a similar idea can be used to define a function whose domain is the natural numbers. 
We can define a function \(f\) with domain \(\mathbb{N}\) by specifying the value of \(f(0)\), and also saying how to compute \(f(n+1)\) if you already know the value of \(f(n)\).\nFor example, we can define a function \(f : \mathbb{N} \to \mathbb{N}\) as follows:\n\n\(f(0) = 1\); for every \(n \in \mathbb{N}\), \(f(n+1) = (n+1) \cdot f(n)\).\n\nHere is the same definition written in Lean. (For reasons that will become clear shortly, we have given the function the name fact.)\ndef fact (k : Nat) : Nat :=\n match k with\n | 0 => 1\n | n + 1 => (n + 1) * fact n\nLean can use this definition to compute fact k for any natural number k. The match statement tells Lean to try to match the input k with one of the two patterns 0 and n + 1, and then to use the corresponding formula after => to compute fact k. For example, if we ask Lean for fact 4, it first checks if 4 matches 0. Since it doesn’t, it goes on to the next line and determines that 4 matches the pattern n + 1, with n = 3, so it uses the formula fact 4 = 4 * fact 3. Of course, now it must compute fact 3, which it does in the same way: 3 matches n + 1 with n = 2, so fact 3 = 3 * fact 2. Continuing in this way, Lean determines that\n\nfact 4 = 4 * fact 3 = 4 * (3 * fact 2) = 4 * (3 * (2 * fact 1))\n = 4 * (3 * (2 * (1 * fact 0))) = 4 * (3 * (2 * (1 * 1))) = 24.\n\nYou can confirm this with the #eval command:\n#eval fact 4 --Answer: 24\nOf course, by now you have probably guessed why we used the name fact for this function: fact k is k factorial—the product of all the numbers from 1 to k.\nThis style of definition is called a recursive definition. If a function is defined by a recursive definition, then theorems about that function are often most easily proven by induction. For example, here is a theorem about the factorial function. 
It is Example 6.3.1 in HTPI, and we begin the Lean proof by imitating the proof in HTPI.\ntheorem ??Example_6_3_1:: : ∀ n ≥ 4, fact n > 2 ^ n := by\n by_induc\n · -- Base Case\n decide\n done\n · -- Induction Step\n fix n : Nat\n assume h1 : n ≥ 4\n assume ih : fact n > 2 ^ n\n show fact (n + 1) > 2 ^ (n + 1) from\n calc fact (n + 1)\n _ = (n + 1) * fact n := by rfl\n _ > (n + 1) * 2 ^ n := sorry\n _ > 2 * 2 ^ n := sorry\n _ = 2 ^ (n + 1) := by ring\n done\n done\nThere are two steps in the calculational proof at the end that require justification. The first says that (n + 1) * fact n > (n + 1) * 2 ^ n, which should follow from the inductive hypothesis ih : fact n > 2 ^ n by multiplying both sides by n + 1. Is there a theorem that would justify this inference?\nThis may remind you of a step in Example_6_1_3 where we used the theorem Nat.mul_le_mul_right, which says ∀ {n m : ℕ} (k : ℕ), n ≤ m → n * k ≤ m * k. Our situation in this example is similar, but it involves a strict inequality (> rather than ≥) and it involves multiplying on the left rather than the right. Many theorems about inequalities in Lean’s library contain either le (for “less than or equal to”) or lt (for “less than”) in their names, but they can also be used to prove statements involving ≥ or >. Perhaps the theorem we need is named something like Nat.mul_lt_mul_left. If you type #check @Nat.mul_lt_mul_ into VS Code, a pop-up window will appear listing several theorems that begin with Nat.mul_lt_mul_. There is no Nat.mul_lt_mul_left, but there is a theorem called Nat.mul_lt_mul_of_pos_left, and its meaning is\n\n@Nat.mul_lt_mul_of_pos_left : ∀ {n m k : ℕ},\n n < m → k > 0 → k * n < k * m\n\nLean has correctly reminded us that, to multiply both sides of a strict inequality by a number k, we need to know that k > 0. So in our case, we’ll need to prove that n + 1 > 0. 
Once we have that, we can use the theorem Nat.mul_lt_mul_of_pos_left to eliminate the first sorry.\nThe second sorry is similar: (n + 1) * 2 ^ n > 2 * 2 ^ n should follow from n + 1 > 2 and 2 ^ n > 0, and you can verify that the theorem that will justify this inference is Nat.mul_lt_mul_of_pos_right.\nSo we have three inequalities that we need to prove before we can justify the steps of the calculational proof: n + 1 > 0, n + 1 > 2, and 2 ^ n > 0. We’ll insert have steps before the calculational proof to assert these three inequalities. If you try it, you’ll find that linarith can prove the first two, but not the third.\nHow can we prove 2 ^ n > 0? It is often helpful to think about whether there is a general principle that is behind a statement we are trying to prove. In our case, the inequality 2 ^ n > 0 is an instance of the general fact that if m and n are any natural numbers with m > 0, then m ^ n > 0. Maybe that fact is in Lean’s library:\nexample (m n : Nat) (h : m > 0) : m ^ n > 0 := by ++apply?::\nThe apply? tactic comes up with exact Nat.pos_pow_of_pos n h, and #check @Nat.pos_pow_of_pos gives the result\n\n@Nat.pos_pow_of_pos : ∀ {n : ℕ} (m : ℕ), 0 < n → 0 < n ^ m\n\nThat means that we can use Nat.pos_pow_of_pos to prove 2 ^ n > 0, but first we’ll need to prove that 2 > 0. We now have all the pieces we need; putting them together leads to this proof:\ntheorem Example_6_3_1 : ∀ n ≥ 4, fact n > 2 ^ n := by\n by_induc\n · -- Base Case\n decide\n done\n · -- Induction Step\n fix n : Nat\n assume h1 : n ≥ 4\n assume ih : fact n > 2 ^ n\n have h2 : n + 1 > 0 := by linarith\n have h3 : n + 1 > 2 := by linarith\n have h4 : 2 > 0 := by linarith\n have h5 : 2 ^ n > 0 := Nat.pos_pow_of_pos n h4\n show fact (n + 1) > 2 ^ (n + 1) from\n calc fact (n + 1)\n _ = (n + 1) * fact n := by rfl\n _ > (n + 1) * 2 ^ n := Nat.mul_lt_mul_of_pos_left ih h2\n _ > 2 * 2 ^ n := Nat.mul_lt_mul_of_pos_right h3 h5\n _ = 2 ^ (n + 1) := by ring\n done\n done\nBut there is an easier way. 
Look at the two “>” steps in the calculational proof at the end of Example_6_3_1. In both cases, we took a known relationship between two quantities and did something to both sides that preserved the relationship. In the first case, the known relationship was ih : fact n > 2 ^ n, and we multiplied both sides by n + 1 on the left; in the second, the known relationship was h3 : n + 1 > 2, and we multiplied both sides by 2 ^ n on the right. To justify these steps, we had to find the right theorems in Lean’s library, and we ended up needing auxiliary positivity facts: h2 : n + 1 > 0 in the first case and h5 : 2 ^ n > 0 in the second. There is a tactic that can simplify these steps: if h is a proof of a statement asserting a relationship between two quantities, then the tactic rel [h] will attempt to prove any statement obtained from that relationship by applying the same operation to both sides. The tactic will try to find a theorem in Lean’s library that says that the operation preserves the relationship, and if the theorem requires auxiliary positivity facts, it will try to prove those facts as well. The rel tactic doesn’t always succeed, but when it does, it saves you the trouble of searching through the library for the necessary theorems. In this case, the tactic allows us to give a much simpler proof of Example_6_3_1:\ntheorem Example_6_3_1 : ∀ n ≥ 4, fact n > 2 ^ n := by\n by_induc\n · -- Base Case\n decide\n done\n · -- Induction Step\n fix n : Nat\n assume h1 : n ≥ 4\n assume ih : fact n > 2 ^ n\n have h2 : n + 1 > 2 := by linarith\n show fact (n + 1) > 2 ^ (n + 1) from\n calc fact (n + 1)\n _ = (n + 1) * fact n := by rfl\n _ > (n + 1) * 2 ^ n := by rel [ih]\n _ > 2 * 2 ^ n := by rel [h2]\n _ = 2 ^ (n + 1) := by ring\n done\n done\nThe next example in HTPI is a proof of one of the laws of exponents: a ^ (m + n) = a ^ m * a ^ n. Lean’s definition of exponentiation with natural number exponents is recursive. 
The definitions Lean uses are essentially as follows:\n--For natural numbers b and k, b ^ k = nat_pow b k:\ndef nat_pow (b k : Nat) : Nat :=\n match k with\n | 0 => 1\n | n + 1 => (nat_pow b n) * b\n\n--For a real number b and a natural number k, b ^ k = real_pow b k:\ndef real_pow (b : Real) (k : Nat) : Real :=\n match k with\n | 0 => 1\n | n + 1 => (real_pow b n) * b\nLet’s prove the addition law for exponents:\ntheorem Example_6_3_2_cheating : ∀ (a : Real) (m n : Nat),\n a ^ (m + n) = a ^ m * a ^ n := by\n fix a : Real; fix m : Nat; fix n : Nat\n ring\n done\nWell, that wasn’t really fair. The ring tactic knows the laws of exponents, so it has no trouble proving this theorem. But we want to know why the law holds, so let’s see if we can prove it without using ring. The following proof is essentially the same as the proof in HTPI:\ntheorem Example_6_3_2 : ∀ (a : Real) (m n : Nat),\n a ^ (m + n) = a ^ m * a ^ n := by\n fix a : Real; fix m : Nat\n --Goal : ∀ (n : Nat), a ^ (m + n) = a ^ m * a ^ n\n by_induc\n · -- Base Case\n show a ^ (m + 0) = a ^ m * a ^ 0 from\n calc a ^ (m + 0)\n _ = a ^ m := by rfl\n _ = a ^ m * 1 := (mul_one (a ^ m)).symm\n _ = a ^ m * a ^ 0 := by rfl\n done\n · -- Induction Step\n fix n : Nat\n assume ih : a ^ (m + n) = a ^ m * a ^ n\n show a ^ (m + (n + 1)) = a ^ m * a ^ (n + 1) from\n calc a ^ (m + (n + 1))\n _ = a ^ ((m + n) + 1) := by rw [add_assoc]\n _ = a ^ (m + n) * a := by rfl\n _ = (a ^ m * a ^ n) * a := by rw [ih]\n _ = a ^ m * (a ^ n * a) := by rw [mul_assoc]\n _ = a ^ m * (a ^ (n + 1)) := by rfl\n done\n done\nFinally, we’ll prove the theorem in Example 6.3.4 of HTPI, which again involves exponentiation with natural number exponents. 
Here’s the beginning of the proof:\n\n\ntheorem Example_6_3_4 : ∀ (x : Real), x > -1 →\n ∀ (n : Nat), (1 + x) ^ n ≥ 1 + n * x := by\n fix x : Real\n assume h1 : x > -1\n **done::\n\n\nx : ℝ\nh1 : x > -1\n⊢ ∀ (n : ℕ),\n>> (1 + x) ^ n ≥\n>> 1 + ↑n * x\n\n\nLook carefully at the goal in the tactic state. Why is there a ↑ before the last n? The reason has to do with types. The variable x has type Real and n has type Nat, so how can Lean multiply n by x? Remember, in Lean, the natural numbers are not a subset of the real numbers. The two types are completely separate, but for each natural number, there is a corresponding real number. To multiply n by x, Lean had to convert n to the corresponding real number, through a process called coercion. The notation ↑n denotes the result of coercing (or casting) n to another type—in this case, Real. Since ↑n and x are both real numbers, Lean can use the multiplication operation on the real numbers to multiply them. (To type ↑ in VSCode, type \\uparrow, or just \\u.)\nAs we will see, the need for coercion in this example will make the proof a bit more complicated, because we’ll need to use some theorems about coercions. Theorems about coercion of natural numbers to some other type often have names that start with Nat.cast.\nContinuing with the proof, since exponentiation is defined recursively, let’s try mathematical induction:\n\n\ntheorem Example_6_3_4 : ∀ (x : Real), x > -1 →\n ∀ (n : Nat), (1 + x) ^ n ≥ 1 + n * x := by\n fix x : Real\n assume h1 : x > -1\n by_induc\n · -- Base Case\n\n **done::\n · -- Induction Step\n\n **done:: \n done\n\n\ncase Base_Case\nx : ℝ\nh1 : x > -1\n⊢ (1 + x) ^ 0 ≥\n>> 1 + ↑0 * x\n\n\nYou might think that linarith could prove the goal for the base case, but it can’t. The problem is the ↑0, which denotes the result of coercing the natural number 0 to a real number. Of course, that should be the real number 0, but is it? Yes, but the linarith tactic doesn’t know that.
The theorem Nat.cast_zero says that ↑0 = 0 (where the 0 on the right side of the equation is the real number 0), so the tactic rewrite [Nat.cast_zero] will convert ↑0 to 0. After that step, linarith can complete the proof of the base case, and we can start on the induction step.\n\n\ntheorem Example_6_3_4 : ∀ (x : Real), x > -1 →\n ∀ (n : Nat), (1 + x) ^ n ≥ 1 + n * x := by\n fix x : Real\n assume h1 : x > -1\n by_induc\n · -- Base Case\n rewrite [Nat.cast_zero]\n linarith\n done\n · -- Induction Step\n fix n : Nat\n assume ih : (1 + x) ^ n ≥ 1 + n * x\n **done::\n done\n\n\ncase Induction_Step\nx : ℝ\nh1 : x > -1\nn : ℕ\nih : (1 + x) ^ n ≥\n>> 1 + ↑n * x\n⊢ (1 + x) ^ (n + 1) ≥\n>> 1 + ↑(n + 1) * x\n\n\nOnce again, there’s a complication caused by coercion. The inductive hypothesis talks about ↑n, but the goal involves ↑(n + 1). What is the relationship between these? Surely it should be the case that ↑(n + 1) = ↑n + 1; that is, the result of coercing the natural number n + 1 to a real number should be one larger than the result of coercing n to a real number. The theorem Nat.cast_succ says exactly that, so rewrite [Nat.cast_succ] will change the ↑(n + 1) in the goal to ↑n + 1. (The number n + 1 is sometimes called the successor of n, and succ is short for “successor.”) With that change, we can continue with the proof. 
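Before using them inside the induction, the two coercion lemmas can be checked in isolation. This is just an illustrative sketch: `Nat.cast_zero` and `Nat.cast_succ` are the library theorems described above, applied here with `Real` as the target type.

```lean
--↑0 = 0: coercing the natural number 0 to ℝ yields the real number 0
example : ((0 : Nat) : Real) = 0 := Nat.cast_zero

--↑(n + 1) = ↑n + 1: coercing a successor is one more than coercing n
example (n : Nat) : ((n + 1 : Nat) : Real) = n + 1 := Nat.cast_succ
```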
The following proof is modeled on the proof in HTPI.\ntheorem ??Example_6_3_4:: : ∀ (x : Real), x > -1 →\n ∀ (n : Nat), (1 + x) ^ n ≥ 1 + n * x := by\n fix x : Real\n assume h1 : x > -1\n by_induc\n · -- Base Case\n rewrite [Nat.cast_zero]\n linarith\n done\n · -- Induction Step\n fix n : Nat\n assume ih : (1 + x) ^ n ≥ 1 + n * x\n rewrite [Nat.cast_succ]\n show (1 + x) ^ (n + 1) ≥ 1 + (n + 1) * x from\n calc (1 + x) ^ (n + 1)\n _ = (1 + x) ^ n * (1 + x) := by rfl\n _ ≥ (1 + n * x) * (1 + x) := sorry\n _ = 1 + n * x + x + n * x ^ 2 := by ring\n _ ≥ 1 + n * x + x + 0 := sorry\n _ = 1 + (n + 1) * x := by ring\n done\n done\nNote that in the calculational proof, each n or n + 1 that is multiplied by x is really ↑n or ↑n + 1, but we don’t need to say so explicitly; Lean fills in coercions automatically when they are required.\nAll that’s left is to replace the two occurrences of sorry with justifications. The first sorry step should follow from the inductive hypothesis by multiplying both sides by 1 + x, so a natural attempt to justify it would be by rel [ih]. Unfortunately, we get an error message saying that rel failed. The error message tells us that rel needed to know that 0 ≤ 1 + x, and it was unable to prove it, so we’ll have to provide a proof of that statement ourselves. 
Fortunately, linarith can handle it (deducing it from h1 : x > -1), and once we fill in that additional step, the rel tactic succeeds.\ntheorem ??Example_6_3_4:: : ∀ (x : Real), x > -1 →\n ∀ (n : Nat), (1 + x) ^ n ≥ 1 + n * x := by\n fix x : Real\n assume h1 : x > -1\n by_induc\n · -- Base Case\n rewrite [Nat.cast_zero]\n linarith\n done\n · -- Induction Step\n fix n : Nat\n assume ih : (1 + x) ^ n ≥ 1 + n * x\n rewrite [Nat.cast_succ]\n have h2 : 0 ≤ 1 + x := by linarith\n show (1 + x) ^ (n + 1) ≥ 1 + (n + 1) * x from\n calc (1 + x) ^ (n + 1)\n _ = (1 + x) ^ n * (1 + x) := by rfl\n _ ≥ (1 + n * x) * (1 + x) := by rel [ih]\n _ = 1 + n * x + x + n * x ^ 2 := by ring\n _ ≥ 1 + n * x + x + 0 := sorry\n _ = 1 + (n + 1) * x := by ring\n done\n done\nFor the second sorry step, we’ll need to know that n * x ^ 2 ≥ 0. To prove it, we start with the fact that the square of any real number is nonnegative:\n\n@sq_nonneg : ∀ {α : Type u_1} [inst : LinearOrderedSemiring α]\n [inst_1 : ExistsAddOfLE α]\n (a : α), 0 ≤ a ^ 2\n\nAs usual, we don’t need to pay much attention to the implicit arguments; what is important is the last line, which tells us that sq_nonneg x is a proof of x ^ 2 ≥ 0. To get n * x ^ 2 ≥ 0 we just have to multiply both sides by n, which we can justify with the rel tactic, and then one more application of rel will handle the remaining sorry. 
Here is the complete proof:\ntheorem Example_6_3_4 : ∀ (x : Real), x > -1 →\n ∀ (n : Nat), (1 + x) ^ n ≥ 1 + n * x := by\n fix x : Real\n assume h1 : x > -1\n by_induc\n · -- Base Case\n rewrite [Nat.cast_zero]\n linarith\n done\n · -- Induction Step\n fix n : Nat\n assume ih : (1 + x) ^ n ≥ 1 + n * x\n rewrite [Nat.cast_succ]\n have h2 : 0 ≤ 1 + x := by linarith\n have h3 : x ^ 2 ≥ 0 := sq_nonneg x\n have h4 : n * x ^ 2 ≥ 0 :=\n calc n * x ^ 2\n _ ≥ n * 0 := by rel [h3]\n _ = 0 := by ring\n show (1 + x) ^ (n + 1) ≥ 1 + (n + 1) * x from\n calc (1 + x) ^ (n + 1)\n _ = (1 + x) ^ n * (1 + x) := by rfl\n _ ≥ (1 + n * x) * (1 + x) := by rel [ih]\n _ = 1 + n * x + x + n * x ^ 2 := by ring\n _ ≥ 1 + n * x + x + 0 := by rel [h4]\n _ = 1 + (n + 1) * x := by ring\n done\n done\nBefore ending this section, we’ll return to a topic left unexplained before. We can now describe how Sum i from k to n, f i is defined. The key is a function sum_seq, which is defined by recursion:\ndef sum_seq {A : Type} [AddZeroClass A]\n (m k : Nat) (f : Nat → A) : A :=\n match m with\n | 0 => 0\n | n + 1 => sum_seq n k f + f (k + n)\nTo get an idea of what this definition means, let’s try evaluating sum_seq 3 k f:\n\nsum_seq 3 k f = sum_seq 2 k f + f (k + 2)\n = sum_seq 1 k f + f (k + 1) + f (k + 2)\n = sum_seq 0 k f + f (k + 0) + f (k + 1) + f (k + 2)\n = 0 + f (k + 0) + f (k + 1) + f (k + 2)\n = f k + f (k + 1) + f (k + 2).\n\nSo sum_seq 3 k f adds up three consecutive values of f, starting with f k. More generally, sum_seq n k f adds up a sequence of n consecutive values of f, starting with f k. (The implicit arguments say that the type of the values of f can be any type for which + and 0 make sense.) The notation Sum i from k to n, f i is now defined to be a shorthand for sum_seq (n + 1 - k) k f. 
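To see the recursion at work, we can specialize `sum_seq` to `Nat` and evaluate the small instance computed above. In this sketch, `sum_seq'` merely restates the definition above with the value type fixed to `Nat`, and `sq` is a sample function chosen for illustration.

```lean
--sum_seq from above, specialized to Nat so that #eval can run it
def sum_seq' (m k : Nat) (f : Nat → Nat) : Nat :=
  match m with
    | 0 => 0
    | n + 1 => sum_seq' n k f + f (k + n)

def sq (i : Nat) : Nat := i * i

--Three consecutive values starting at sq 2: 4 + 9 + 16
#eval sum_seq' 3 2 sq   --Answer: 29
```

With k = 2 and n = 4 we have n + 1 - k = 3, so the notation Sum i from 2 to 4, sq i unfolds to this same computation.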
We’ll leave it to you to puzzle out why that gives the desired result.\n\nExercises\n\ntheorem Exercise_6_3_4 : ∀ (n : Nat),\n 3 * (Sum i from 0 to n, (2 * i + 1) ^ 2) =\n (n + 1) * (2 * n + 1) * (2 * n + 3) := sorry\n\n\ntheorem Exercise_6_3_7b (f : Nat → Real) (c : Real) : ∀ (n : Nat),\n Sum i from 0 to n, c * f i = c * Sum i from 0 to n, f i := sorry\n\n\ntheorem fact_pos : ∀ (n : Nat), fact n ≥ 1 := sorry\n\n\n--Hint: Use the theorem fact_pos from the previous exercise.\ntheorem Exercise_6_3_13a (k : Nat) : ∀ (n : Nat),\n fact (k ^ 2 + n) ≥ k ^ (2 * n) := sorry\n\n\n--Hint: Use the theorem in the previous exercise.\n--You may find it useful to first prove a lemma:\n--∀ (k : Nat), 2 * k ^ 2 + 1 ≥ k\ntheorem Exercise_6_3_13b (k : Nat) : ∀ n ≥ 2 * k ^ 2,\n fact n ≥ k ^ n := sorry\n\n6. A sequence is defined recursively as follows:\ndef seq_6_3_15 (k : Nat) : Int :=\n match k with\n | 0 => 0\n | n + 1 => 2 * seq_6_3_15 n + n\nProve the following theorem about this sequence:\ntheorem Exercise_6_3_15 : ∀ (n : Nat),\n seq_6_3_15 n = 2 ^ n - n - 1 := sorry\n7. A sequence is defined recursively as follows:\ndef seq_6_3_16 (k : Nat) : Nat :=\n match k with\n | 0 => 2\n | n + 1 => (seq_6_3_16 n) ^ 2\nFind a formula for seq_6_3_16 n. Fill in the blank in the theorem below with your formula and then prove the theorem.\ntheorem Exercise_6_3_16 : ∀ (n : Nat),\n seq_6_3_16 n = ___ := sorry" }, { "objectID": "Chap6.html#strong-induction", "href": "Chap6.html#strong-induction", "title": "6  Mathematical Induction", "section": "6.4. Strong Induction", - "text": "6.4. Strong Induction\nIn the induction step of a proof by mathematical induction, we prove that a natural number has some property from the assumption that the previous number has the property. Section 6.4 of HTPI introduces a version of mathematical induction in which we get to assume that all smaller numbers have the property. Since this is a stronger assumption, this version of induction is called strong induction. 
Here is how strong induction works (HTPI p. 304):\n\nTo prove a goal of the form ∀ (n : Nat), P n:\n\nProve that ∀ (n : Nat), (∀ n_1 < n, P n_1) → P n.\n\nTo write a proof by strong induction in Lean, we use the tactic by_strong_induc, whose effect on the tactic state can be illustrated as follows.\n\n\n>> ⋮\n⊢ ∀ (n : Nat), P n\n\n\n>> ⋮\n⊢ ∀ (n : Nat),\n>> (∀ n_1 < n, P n_1) → P n\n\n\nTo illustrate this, we begin with Example 6.4.1 of HTPI.\ntheorem Example_6_4_1 : ∀ m > 0, ∀ (n : Nat),\n ∃ (q r : Nat), n = m * q + r ∧ r < m\nImitating the strategy of the proof in HTPI, we let m be an arbitrary natural number, assume m > 0, and then prove the statement ∀ (n : Nat), ∃ (q r : Nat), n = m * q + r ∧ r < m by strong induction. That means that after introducing an arbitrary natural number n, we assume the inductive hypothesis, which says ∀ n_1 < n, ∃ (q r : Nat), n_1 = m * q + r ∧ r < m.\ntheorem Example_6_4_1 : ∀ m > 0, ∀ (n : Nat),\n ∃ (q r : Nat), n = m * q + r ∧ r < m := by\n fix m : Nat\n assume h1 : m > 0\n by_strong_induc\n fix n : Nat\n assume ih : ∀ n_1 < n, ∃ (q r : Nat), n_1 = m * q + r ∧ r < m\n **done::\nOur goal now is to prove that ∃ (q r : Nat), n = m * q + r ∧ r < m. Although strong induction does not require a base case, it is not uncommon for proofs by strong induction to involve reasoning by cases. The proof in HTPI uses cases based on whether or not n < m. If n < m, then the proof is easy: the numbers q = 0 and r​ = n clearly have the required properties. If ¬n < m, then we can write n as n = k + m, for some natural number k. Since m > 0, we have k < n, so we can apply the inductive hypothesis to k. Notice that if m > 1, then k is not the number immediately preceding n; that’s why this proof uses strong induction rather than ordinary induction.\nHow do we come up with the number k in the previous paragraph? We’ll use a theorem from Lean’s library. 
There are two slightly different versions of the theorem—notice that the first ends with m + k and the second ends with k + m:\n\n@Nat.exists_eq_add_of_le : ∀ {m n : ℕ}, m ≤ n → ∃ (k : ℕ), n = m + k\n\n@Nat.exists_eq_add_of_le' : ∀ {m n : ℕ}, m ≤ n → ∃ (k : ℕ), n = k + m\n\nWe’ll use the second version in our proof.\ntheorem Example_6_4_1 : ∀ m > 0, ∀ (n : Nat),\n ∃ (q r : Nat), n = m * q + r ∧ r < m := by\n fix m : Nat\n assume h1 : m > 0\n by_strong_induc\n fix n : Nat\n assume ih : ∀ n_1 < n, ∃ (q r : Nat), n_1 = m * q + r ∧ r < m\n by_cases h2 : n < m\n · -- Case 1. h2 : n < m\n apply Exists.intro 0\n apply Exists.intro n --Goal : n = m * 0 + n ∧ n < m\n apply And.intro _ h2\n ring\n done\n · -- Case 2. h2 : ¬n < m\n have h3 : m ≤ n := by linarith\n obtain (k : Nat) (h4 : n = k + m) from Nat.exists_eq_add_of_le' h3\n have h5 : k < n := by linarith\n have h6 : ∃ (q r : Nat), k = m * q + r ∧ r < m := ih k h5\n obtain (q' : Nat)\n (h7 : ∃ (r : Nat), k = m * q' + r ∧ r < m) from h6\n obtain (r' : Nat) (h8 : k = m * q' + r' ∧ r' < m) from h7\n apply Exists.intro (q' + 1)\n apply Exists.intro r' --Goal : n = m * (q' + 1) + r' ∧ r' < m\n apply And.intro _ h8.right\n show n = m * (q' + 1) + r' from\n calc n\n _ = k + m := h4\n _ = m * q' + r' + m := by rw [h8.left]\n _ = m * (q' + 1) + r' := by ring\n done\n done\nThe numbers q and r in Example_6_4_1 are called the quotient and remainder when n is divided by m. Lean knows how to compute these numbers: if n and m are natural numbers, then in Lean, n / m denotes the quotient when n is divided by m, and n % m denotes the remainder. (The number n % m is also sometimes called n modulo m, or n mod m.) 
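For example, we can ask Lean to compute a quotient and remainder directly and confirm that they satisfy the two conditions from Example_6_4_1. (These #eval commands are just illustrations; bare numeric literals in them default to Nat.)

```lean
#eval 17 / 5                  --Answer: 3
#eval 17 % 5                  --Answer: 2

--n = m * q + r: 5 * 3 + 2 = 17
#eval 5 * (17 / 5) + 17 % 5   --Answer: 17
```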
And Lean knows theorems stating that these numbers have the properties specified in Example_6_4_1:\n\n@Nat.div_add_mod : ∀ (m n : ℕ), n * (m / n) + m % n = m\n\n@Nat.mod_lt : ∀ (x : ℕ) {y : ℕ}, y > 0 → x % y < y\n\nBy the way, although we are unlikely to want to use the notation n / 0 or n % 0, Lean uses the definitions n / 0 = 0 and n % 0 = n. As a result, the equation n * (m / n) + m % n = m is true even if n = 0. That’s why the theorem Nat.div_add_mod doesn’t include a requirement that n > 0. It is important to keep in mind that division of natural numbers is not the same as division of real numbers. For example, dividing the natural number 5 by the natural number 2 gives a quotient of 2 (with a remainder of 1), so (5 : Nat) / (2 : Nat) is 2, but (5 : Real) / (2 : Real) is 2.5.\nThere is also a strong form of recursion. As an example of this, here is a recursive definition of a sequence of numbers called the Fibonacci numbers:\ndef Fib (n : Nat) : Nat :=\n match n with\n | 0 => 0\n | 1 => 1\n | k + 2 => Fib k + Fib (k + 1)\nNotice that the formula for Fib (k + 2) involves the two previous values of Fib, not just the immediately preceding value. That is the sense in which the recursion is strong. Not surprisingly, theorems about the Fibonacci numbers are often proven by induction—either ordinary or strong. We’ll illustrate this with a proof by strong induction that ∀ (n : Nat), Fib n < 2 ^ n. This time we’ll need to treat the cases n = 0 and n = 1 separately, since these values are treated separately in the definition of Fib n. And we’ll need to know that if n doesn’t fall into either of those cases, then it falls into the third case: n = k + 2 for some natural number k. 
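As a quick sanity check on the recursive definition, the first several values of Fib form the familiar Fibonacci sequence 0, 1, 1, 2, 3, 5, 8, 13, …:

```lean
--Assuming the definition of Fib given above
#eval Fib 6    --Answer: 8
#eval Fib 10   --Answer: 55
```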
Since similar ideas will come up several times in the rest of this book, it will be useful to begin by proving lemmas that will help with this kind of reasoning.\nWe’ll need two theorems from Lean’s library, the second of which has two slightly different versions:\n\n@Nat.pos_of_ne_zero : ∀ {n : ℕ}, n ≠ 0 → 0 < n\n\n@lt_of_le_of_ne : ∀ {α : Type u_1} [inst : PartialOrder α] {a b : α},\n a ≤ b → a ≠ b → a < b\n\n@lt_of_le_of_ne' : ∀ {α : Type u_1} [inst : PartialOrder α] {a b : α},\n a ≤ b → b ≠ a → a < b\n\nIf we have h1 : n ≠ 0, then Nat.pos_of_ne_zero h1 is a proof of 0 < n. But for natural numbers a and b, Lean treats a < b as meaning the same thing as a + 1 ≤ b, so this is also a proof of 1 ≤ n. If we also have h2 : n ≠ 1, then we can use lt_of_le_of_ne' to conclude 1 < n, which is definitionally equal to 2 ≤ n. Combining this reasoning with the theorem Nat.exists_eq_add_of_le', which we used in the last example, we can prove two lemmas that will be helpful for reasoning in which the first one or two natural numbers have to be treated separately.\nlemma exists_eq_add_one_of_ne_zero {n : Nat}\n (h1 : n ≠ 0) : ∃ (k : Nat), n = k + 1 := by\n have h2 : 1 ≤ n := Nat.pos_of_ne_zero h1\n show ∃ (k : Nat), n = k + 1 from Nat.exists_eq_add_of_le' h2\n done\n\ntheorem exists_eq_add_two_of_ne_zero_one {n : Nat}\n (h1 : n ≠ 0) (h2 : n ≠ 1) : ∃ (k : Nat), n = k + 2 := by\n have h3 : 1 ≤ n := Nat.pos_of_ne_zero h1\n have h4 : 2 ≤ n := lt_of_le_of_ne' h3 h2\n show ∃ (k : Nat), n = k + 2 from Nat.exists_eq_add_of_le' h4\n done\nWith this preparation, we can present the proof:\nexample : ∀ (n : Nat), Fib n < 2 ^ n := by\n by_strong_induc\n fix n : Nat\n assume ih : ∀ n_1 < n, Fib n_1 < 2 ^ n_1\n by_cases h1 : n = 0\n · -- Case 1. h1 : n = 0\n rewrite [h1] --Goal : Fib 0 < 2 ^ 0\n decide\n done\n · -- Case 2. h1 : ¬n = 0\n by_cases h2 : n = 1\n · -- Case 2.1. h2 : n = 1\n rewrite [h2]\n decide\n done\n · -- Case 2.2. 
h2 : ¬n = 1\n obtain (k : Nat) (h3 : n = k + 2) from\n exists_eq_add_two_of_ne_zero_one h1 h2\n have h4 : k < n := by linarith\n have h5 : Fib k < 2 ^ k := ih k h4\n have h6 : k + 1 < n := by linarith\n have h7 : Fib (k + 1) < 2 ^ (k + 1) := ih (k + 1) h6\n rewrite [h3] --Goal : Fib (k + 2) < 2 ^ (k + 2)\n show Fib (k + 2) < 2 ^ (k + 2) from\n calc Fib (k + 2)\n _ = Fib k + Fib (k + 1) := by rfl\n _ < 2 ^ k + Fib (k + 1) := by rel [h5]\n _ < 2 ^ k + 2 ^ (k + 1) := by rel [h7]\n _ ≤ 2 ^ k + 2 ^ (k + 1) + 2 ^ k := by linarith\n _ = 2 ^ (k + 2) := by ring\n done\n done\n done\nAs with ordinary induction, strong induction can be useful for proving statements that do not at first seem to have the form ∀ (n : Nat), .... To illustrate this, we’ll prove the well-ordering principle, which says that if a set S : Set Nat is nonempty, then it has a smallest element. We’ll prove the contrapositive: if S has no smallest element, then it is empty. To say that S is empty means ∀ (n : Nat), n ∉ S, and that’s the statement to which we will apply strong induction.\ntheorem well_ord_princ (S : Set Nat) : (∃ (n : Nat), n ∈ S) →\n ∃ n ∈ S, ∀ m ∈ S, n ≤ m := by\n contrapos\n assume h1 : ¬∃ n ∈ S, ∀ m ∈ S, n ≤ m\n quant_neg --Goal : ∀ (n : Nat), n ∉ S\n by_strong_induc\n fix n : Nat\n assume ih : ∀ n_1 < n, n_1 ∉ S --Goal : n ∉ S\n contradict h1 with h2 --h2 : n ∈ S\n --Goal : ∃ n ∈ S, ∀ m ∈ S, n ≤ m\n apply Exists.intro n --Goal : n ∈ S ∧ ∀ m ∈ S, n ≤ m\n apply And.intro h2 --Goal : ∀ m ∈ S, n ≤ m\n fix m : Nat\n assume h3 : m ∈ S\n have h4 : m < n → m ∉ S := ih m\n contrapos at h4 --h4 : m ∈ S → ¬m < n\n have h5 : ¬m < n := h4 h3\n linarith\n done\nSection 6.4 of HTPI ends with an example of an application of the well ordering principle. The example gives a proof that \\(\\sqrt{2}\\) is irrational. If \\(\\sqrt{2}\\) were rational, then there would be natural numbers \\(p\\) and \\(q\\) such that \\(q \\ne 0\\) and \\(p/q = \\sqrt{2}\\), and therefore \\(p^2 = 2q^2\\). 
So we can prove that \\(\\sqrt{2}\\) is irrational by showing that there do not exist natural numbers \\(p\\) and \\(q\\) such that \\(q \\ne 0\\) and \\(p^2 = 2q^2\\).\nThe proof uses a definition from the exercises of Section 6.1:\ndef nat_even (n : Nat) : Prop := ∃ (k : Nat), n = 2 * k\nWe will also use the following lemma, whose proof we leave as an exercise for you:\nlemma sq_even_iff_even (n : Nat) : nat_even (n * n) ↔ nat_even n := sorry\nAnd we’ll need another theorem that we haven’t seen before:\n\n@mul_left_cancel_iff_of_pos : ∀ {α : Type u_1} {a b c : α}\n [inst : MulZeroClass α] [inst_1 : PartialOrder α]\n [inst_2 : PosMulMonoRev α],\n 0 < a → (a * b = a * c ↔ b = c)\n\nTo show that \\(\\sqrt{2}\\) is irrational, we will prove the statement\n\n¬∃ (q p : Nat), p * p = 2 * (q * q) ∧ q ≠ 0\n\nWe proceed by contradiction. If this statement were false, then the set\n\nS = {q : Nat | ∃ (p : Nat), p * p = 2 * (q * q) ∧ q ≠ 0}\n\nwould be nonempty, and therefore, by the well ordering principle, it would have a smallest element. We then show that this leads to a contradiction. 
Here is the proof.\ntheorem Theorem_6_4_5 :\n ¬∃ (q p : Nat), p * p = 2 * (q * q) ∧ q ≠ 0 := by\n set S : Set Nat :=\n {q : Nat | ∃ (p : Nat), p * p = 2 * (q * q) ∧ q ≠ 0}\n by_contra h1\n have h2 : ∃ (q : Nat), q ∈ S := h1\n have h3 : ∃ q ∈ S, ∀ r ∈ S, q ≤ r := well_ord_princ S h2\n obtain (q : Nat) (h4 : q ∈ S ∧ ∀ r ∈ S, q ≤ r) from h3\n have qinS : q ∈ S := h4.left\n have qleast : ∀ r ∈ S, q ≤ r := h4.right\n define at qinS --qinS : ∃ (p : Nat), p * p = 2 * (q * q) ∧ q ≠ 0\n obtain (p : Nat) (h5 : p * p = 2 * (q * q) ∧ q ≠ 0) from qinS\n have pqsqrt2 : p * p = 2 * (q * q) := h5.left\n have qne0 : q ≠ 0 := h5.right\n have h6 : nat_even (p * p) := Exists.intro (q * q) pqsqrt2\n rewrite [sq_even_iff_even p] at h6 --h6 : nat_even p\n obtain (p' : Nat) (p'halfp : p = 2 * p') from h6\n have h7 : 2 * (2 * (p' * p')) = 2 * (q * q) := by\n rewrite [←pqsqrt2, p'halfp]\n ring\n done\n have h8 : 2 > 0 := by decide\n rewrite [mul_left_cancel_iff_of_pos h8] at h7\n --h7 : 2 * (p' * p') = q * q\n have h9 : nat_even (q * q) := Exists.intro (p' * p') h7.symm\n rewrite [sq_even_iff_even q] at h9 --h9 : nat_even q\n obtain (q' : Nat) (q'halfq : q = 2 * q') from h9\n have h10 : 2 * (p' * p') = 2 * (2 * (q' * q')) := by\n rewrite [h7, q'halfq]\n ring\n done\n rewrite [mul_left_cancel_iff_of_pos h8] at h10\n --h10 : p' * p' = 2 * (q' * q')\n have q'ne0 : q' ≠ 0 := by\n contradict qne0 with h11\n rewrite [q'halfq, h11]\n rfl\n done\n have q'inS : q' ∈ S := Exists.intro p' (And.intro h10 q'ne0)\n have qleq' : q ≤ q' := qleast q' q'inS\n rewrite [q'halfq] at qleq' --qleq' : 2 * q' ≤ q'\n contradict q'ne0\n linarith\n done\n\n\nExercises\n\n--Hint: Use Exercise_6_1_16a1 and Exercise_6_1_16a2\n--from the exercises of Section 6.1.\nlemma sq_even_iff_even (n : Nat) :\n nat_even (n * n) ↔ nat_even n := sorry\n\n\n--This theorem proves that the square root of 6 is irrational\ntheorem Exercise_6_4_4a :\n ¬∃ (q p : Nat), p * p = 6 * (q * q) ∧ q ≠ 0 := sorry\n\n\ntheorem Exercise_6_4_5 :\n ∀ n ≥ 
12, ∃ (a b : Nat), 3 * a + 7 * b = n := sorry\n\n\ntheorem Exercise_6_4_7a : ∀ (n : Nat),\n (Sum i from 0 to n, Fib i) + 1 = Fib (n + 2) := sorry\n\n\ntheorem Exercise_6_4_7c : ∀ (n : Nat),\n Sum i from 0 to n, Fib (2 * i + 1) = Fib (2 * n + 2) := sorry\n\n\ntheorem Exercise_6_4_8a : ∀ (m n : Nat),\n Fib (m + n + 1) = Fib m * Fib n + Fib (m + 1) * Fib (n + 1) := sorry\n\n\ntheorem Exercise_6_4_8d : ∀ (m k : Nat), Fib m ∣ Fib (m * k) := sorry\nHint for #7: Let m be an arbitrary natural number, and then use induction on k. For the induction step, you must prove Fib m ∣ Fib (m * (k + 1)). If m = 0 ∨ k = 0, then this is easy. If not, then use exists_eq_add_one_of_ne_zero to obtain a natural number j such that m * k = j + 1, and therefore m * (k + 1) = j + m + 1, and then apply Exercise_6_4_8a.\n\n\ndef Fib_like (n : Nat) : Nat :=\n match n with\n | 0 => 1\n | 1 => 2\n | k + 2 => 2 * (Fib_like k) + Fib_like (k + 1)\n\ntheorem Fib_like_formula : ∀ (n : Nat), Fib_like n = 2 ^ n := sorry\n\n\ndef triple_rec (n : Nat) : Nat :=\n match n with\n | 0 => 0\n | 1 => 2\n | 2 => 4\n | k + 3 => 4 * triple_rec k +\n 6 * triple_rec (k + 1) + triple_rec (k + 2)\n\ntheorem triple_rec_formula :\n ∀ (n : Nat), triple_rec n = 2 ^ n * Fib n := sorry\n\n10. In this exercise you will prove that the numbers q and r in Example_6_4_1 are unique. It is helpful to prove a lemma first.\nlemma quot_rem_unique_lemma {m q r q' r' : Nat}\n (h1 : m * q + r = m * q' + r') (h2 : r' < m) : q ≤ q' := sorry\n\ntheorem quot_rem_unique (m q r q' r' : Nat)\n (h1 : m * q + r = m * q' + r') (h2 : r < m) (h3 : r' < m) :\n q = q' ∧ r = r' := sorry\n11. Use the theorem in the previous exercise to prove the following characterization of n / m and n % m.\ntheorem div_mod_char (m n q r : Nat)\n (h1 : n = m * q + r) (h2 : r < m) : q = n / m ∧ r = n % m := sorry" + "text": "6.4. 
Strong Induction\nIn the induction step of a proof by mathematical induction, we prove that a natural number has some property from the assumption that the previous number has the property. Section 6.4 of HTPI introduces a version of mathematical induction in which we get to assume that all smaller numbers have the property. Since this is a stronger assumption, this version of induction is called strong induction. Here is how strong induction works (HTPI p. 304):\n\nTo prove a goal of the form ∀ (n : Nat), P n:\n\nProve that ∀ (n : Nat), (∀ n_1 < n, P n_1) → P n.\n\nTo write a proof by strong induction in Lean, we use the tactic by_strong_induc, whose effect on the tactic state can be illustrated as follows.\n\n\n>> ⋮\n⊢ ∀ (n : Nat), P n\n\n\n>> ⋮\n⊢ ∀ (n : Nat),\n>> (∀ n_1 < n, P n_1) → P n\n\n\nTo illustrate this, we begin with Example 6.4.1 of HTPI.\ntheorem Example_6_4_1 : ∀ m > 0, ∀ (n : Nat),\n ∃ (q r : Nat), n = m * q + r ∧ r < m\nImitating the strategy of the proof in HTPI, we let m be an arbitrary natural number, assume m > 0, and then prove the statement ∀ (n : Nat), ∃ (q r : Nat), n = m * q + r ∧ r < m by strong induction. That means that after introducing an arbitrary natural number n, we assume the inductive hypothesis, which says ∀ n_1 < n, ∃ (q r : Nat), n_1 = m * q + r ∧ r < m.\ntheorem Example_6_4_1 : ∀ m > 0, ∀ (n : Nat),\n ∃ (q r : Nat), n = m * q + r ∧ r < m := by\n fix m : Nat\n assume h1 : m > 0\n by_strong_induc\n fix n : Nat\n assume ih : ∀ n_1 < n, ∃ (q r : Nat), n_1 = m * q + r ∧ r < m\n **done::\nOur goal now is to prove that ∃ (q r : Nat), n = m * q + r ∧ r < m. Although strong induction does not require a base case, it is not uncommon for proofs by strong induction to involve reasoning by cases. The proof in HTPI uses cases based on whether or not n < m. If n < m, then the proof is easy: the numbers q = 0 and r​ = n clearly have the required properties. If ¬n < m, then we can write n as n = k + m, for some natural number k. 
Since m > 0, we have k < n, so we can apply the inductive hypothesis to k. Notice that if m > 1, then k is not the number immediately preceding n; that’s why this proof uses strong induction rather than ordinary induction.\nHow do we come up with the number k in the previous paragraph? We’ll use a theorem from Lean’s library. There are two slightly different versions of the theorem—notice that the first ends with m + k and the second ends with k + m:\n\n@Nat.exists_eq_add_of_le : ∀ {m n : ℕ}, m ≤ n → ∃ (k : ℕ), n = m + k\n\n@Nat.exists_eq_add_of_le' : ∀ {m n : ℕ}, m ≤ n → ∃ (k : ℕ), n = k + m\n\nWe’ll use the second version in our proof.\ntheorem Example_6_4_1 : ∀ m > 0, ∀ (n : Nat),\n ∃ (q r : Nat), n = m * q + r ∧ r < m := by\n fix m : Nat\n assume h1 : m > 0\n by_strong_induc\n fix n : Nat\n assume ih : ∀ n_1 < n, ∃ (q r : Nat), n_1 = m * q + r ∧ r < m\n by_cases h2 : n < m\n · -- Case 1. h2 : n < m\n apply Exists.intro 0\n apply Exists.intro n --Goal : n = m * 0 + n ∧ n < m\n apply And.intro _ h2\n ring\n done\n · -- Case 2. h2 : ¬n < m\n have h3 : m ≤ n := by linarith\n obtain (k : Nat) (h4 : n = k + m) from Nat.exists_eq_add_of_le' h3\n have h5 : k < n := by linarith\n have h6 : ∃ (q r : Nat), k = m * q + r ∧ r < m := ih k h5\n obtain (q' : Nat)\n (h7 : ∃ (r : Nat), k = m * q' + r ∧ r < m) from h6\n obtain (r' : Nat) (h8 : k = m * q' + r' ∧ r' < m) from h7\n apply Exists.intro (q' + 1)\n apply Exists.intro r' --Goal : n = m * (q' + 1) + r' ∧ r' < m\n apply And.intro _ h8.right\n show n = m * (q' + 1) + r' from\n calc n\n _ = k + m := h4\n _ = m * q' + r' + m := by rw [h8.left]\n _ = m * (q' + 1) + r' := by ring\n done\n done\nThe numbers q and r in Example_6_4_1 are called the quotient and remainder when n is divided by m. Lean knows how to compute these numbers: if n and m are natural numbers, then in Lean, n / m denotes the quotient when n is divided by m, and n % m denotes the remainder. (The number n % m is also sometimes called n modulo m, or n mod m.) 
And Lean knows theorems stating that these numbers have the properties specified in Example_6_4_1:\n\n@Nat.div_add_mod : ∀ (m n : ℕ), n * (m / n) + m % n = m\n\n@Nat.mod_lt : ∀ (x : ℕ) {y : ℕ}, y > 0 → x % y < y\n\nBy the way, although we are unlikely to want to use the notation n / 0 or n % 0, Lean uses the definitions n / 0 = 0 and n % 0 = n. As a result, the equation n * (m / n) + m % n = m is true even if n = 0. That’s why the theorem Nat.div_add_mod doesn’t include a requirement that n > 0. It is important to keep in mind that division of natural numbers is not the same as division of real numbers. For example, dividing the natural number 5 by the natural number 2 gives a quotient of 2 (with a remainder of 1), so (5 : Nat) / (2 : Nat) is 2, but (5 : Real) / (2 : Real) is 2.5.\nThere is also a strong form of recursion. As an example of this, here is a recursive definition of a sequence of numbers called the Fibonacci numbers:\ndef Fib (n : Nat) : Nat :=\n match n with\n | 0 => 0\n | 1 => 1\n | k + 2 => Fib k + Fib (k + 1)\nNotice that the formula for Fib (k + 2) involves the two previous values of Fib, not just the immediately preceding value. That is the sense in which the recursion is strong. Not surprisingly, theorems about the Fibonacci numbers are often proven by induction—either ordinary or strong. We’ll illustrate this with a proof by strong induction that ∀ (n : Nat), Fib n < 2 ^ n. This time we’ll need to treat the cases n = 0 and n = 1 separately, since these values are treated separately in the definition of Fib n. And we’ll need to know that if n doesn’t fall into either of those cases, then it falls into the third case: n = k + 2 for some natural number k. 
Since similar ideas will come up several times in the rest of this book, it will be useful to begin by proving lemmas that will help with this kind of reasoning.\nWe’ll need two theorems from Lean’s library, the second of which has two slightly different versions:\n\n@Nat.pos_of_ne_zero : ∀ {n : ℕ}, n ≠ 0 → 0 < n\n\n@lt_of_le_of_ne : ∀ {α : Type u_1} [inst : PartialOrder α] {a b : α},\n a ≤ b → a ≠ b → a < b\n\n@lt_of_le_of_ne' : ∀ {α : Type u_1} [inst : PartialOrder α] {a b : α},\n a ≤ b → b ≠ a → a < b\n\nIf we have h1 : n ≠ 0, then Nat.pos_of_ne_zero h1 is a proof of 0 < n. But for natural numbers a and b, Lean treats a < b as meaning the same thing as a + 1 ≤ b, so this is also a proof of 1 ≤ n. If we also have h2 : n ≠ 1, then we can use lt_of_le_of_ne' to conclude 1 < n, which is definitionally equal to 2 ≤ n. Combining this reasoning with the theorem Nat.exists_eq_add_of_le', which we used in the last example, we can prove two lemmas that will be helpful for reasoning in which the first one or two natural numbers have to be treated separately.\nlemma exists_eq_add_one_of_ne_zero {n : Nat}\n (h1 : n ≠ 0) : ∃ (k : Nat), n = k + 1 := by\n have h2 : 1 ≤ n := Nat.pos_of_ne_zero h1\n show ∃ (k : Nat), n = k + 1 from Nat.exists_eq_add_of_le' h2\n done\n\ntheorem exists_eq_add_two_of_ne_zero_one {n : Nat}\n (h1 : n ≠ 0) (h2 : n ≠ 1) : ∃ (k : Nat), n = k + 2 := by\n have h3 : 1 ≤ n := Nat.pos_of_ne_zero h1\n have h4 : 2 ≤ n := lt_of_le_of_ne' h3 h2\n show ∃ (k : Nat), n = k + 2 from Nat.exists_eq_add_of_le' h4\n done\nWith this preparation, we can present the proof:\nexample : ∀ (n : Nat), Fib n < 2 ^ n := by\n by_strong_induc\n fix n : Nat\n assume ih : ∀ n_1 < n, Fib n_1 < 2 ^ n_1\n by_cases h1 : n = 0\n · -- Case 1. h1 : n = 0\n rewrite [h1] --Goal : Fib 0 < 2 ^ 0\n decide\n done\n · -- Case 2. h1 : ¬n = 0\n by_cases h2 : n = 1\n · -- Case 2.1. h2 : n = 1\n rewrite [h2]\n decide\n done\n · -- Case 2.2. 
h2 : ¬n = 1\n obtain (k : Nat) (h3 : n = k + 2) from\n exists_eq_add_two_of_ne_zero_one h1 h2\n have h4 : k < n := by linarith\n have h5 : Fib k < 2 ^ k := ih k h4\n have h6 : k + 1 < n := by linarith\n have h7 : Fib (k + 1) < 2 ^ (k + 1) := ih (k + 1) h6\n rewrite [h3] --Goal : Fib (k + 2) < 2 ^ (k + 2)\n show Fib (k + 2) < 2 ^ (k + 2) from\n calc Fib (k + 2)\n _ = Fib k + Fib (k + 1) := by rfl\n _ < 2 ^ k + Fib (k + 1) := by rel [h5]\n _ < 2 ^ k + 2 ^ (k + 1) := by rel [h7]\n _ ≤ 2 ^ k + 2 ^ (k + 1) + 2 ^ k := by linarith\n _ = 2 ^ (k + 2) := by ring\n done\n done\n done\nAs with ordinary induction, strong induction can be useful for proving statements that do not at first seem to have the form ∀ (n : Nat), .... To illustrate this, we’ll prove the well-ordering principle, which says that if a set S : Set Nat is nonempty, then it has a smallest element. We’ll prove the contrapositive: if S has no smallest element, then it is empty. To say that S is empty means ∀ (n : Nat), n ∉ S, and that’s the statement to which we will apply strong induction.\ntheorem well_ord_princ (S : Set Nat) : (∃ (n : Nat), n ∈ S) →\n ∃ n ∈ S, ∀ m ∈ S, n ≤ m := by\n contrapos\n assume h1 : ¬∃ n ∈ S, ∀ m ∈ S, n ≤ m\n quant_neg --Goal : ∀ (n : Nat), n ∉ S\n by_strong_induc\n fix n : Nat\n assume ih : ∀ n_1 < n, n_1 ∉ S --Goal : n ∉ S\n contradict h1 with h2 --h2 : n ∈ S\n --Goal : ∃ n ∈ S, ∀ m ∈ S, n ≤ m\n apply Exists.intro n --Goal : n ∈ S ∧ ∀ m ∈ S, n ≤ m\n apply And.intro h2 --Goal : ∀ m ∈ S, n ≤ m\n fix m : Nat\n assume h3 : m ∈ S\n have h4 : m < n → m ∉ S := ih m\n contrapos at h4 --h4 : m ∈ S → ¬m < n\n have h5 : ¬m < n := h4 h3\n linarith\n done\nSection 6.4 of HTPI ends with an example of an application of the well-ordering principle. The example gives a proof that \\(\\sqrt{2}\\) is irrational. If \\(\\sqrt{2}\\) were rational, then there would be natural numbers \\(p\\) and \\(q\\) such that \\(q \\ne 0\\) and \\(p/q = \\sqrt{2}\\), and therefore \\(p^2 = 2q^2\\). 
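To spell out that last step: squaring both sides of \\(p/q = \\sqrt{2}\\) and then multiplying through by \\(q^2\\) gives \\[\np/q = \\sqrt{2} \\implies p^2/q^2 = 2 \\implies p^2 = 2q^2.\n\\] 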
So we can prove that \\(\\sqrt{2}\\) is irrational by showing that there do not exist natural numbers \\(p\\) and \\(q\\) such that \\(q \\ne 0\\) and \\(p^2 = 2q^2\\).\nThe proof uses a definition from the exercises of Section 6.1:\ndef nat_even (n : Nat) : Prop := ∃ (k : Nat), n = 2 * k\nWe will also use the following lemma, whose proof we leave as an exercise for you:\nlemma sq_even_iff_even (n : Nat) : nat_even (n * n) ↔ nat_even n := sorry\nAnd we’ll need another theorem that we haven’t seen before:\n\n@mul_left_cancel_iff_of_pos : ∀ {α : Type u_1} {a b c : α}\n [inst : MulZeroClass α] [inst_1 : PartialOrder α]\n [inst_2 : PosMulReflectLE α],\n 0 < a → (a * b = a * c ↔ b = c)\n\nTo show that \\(\\sqrt{2}\\) is irrational, we will prove the statement\n\n¬∃ (q p : Nat), p * p = 2 * (q * q) ∧ q ≠ 0\n\nWe proceed by contradiction. If this statement were false, then the set\n\nS = {q : Nat | ∃ (p : Nat), p * p = 2 * (q * q) ∧ q ≠ 0}\n\nwould be nonempty, and therefore, by the well-ordering principle, it would have a smallest element. We then show that this leads to a contradiction. 
Here is the proof.\ntheorem Theorem_6_4_5 :\n ¬∃ (q p : Nat), p * p = 2 * (q * q) ∧ q ≠ 0 := by\n set S : Set Nat :=\n {q : Nat | ∃ (p : Nat), p * p = 2 * (q * q) ∧ q ≠ 0}\n by_contra h1\n have h2 : ∃ (q : Nat), q ∈ S := h1\n have h3 : ∃ q ∈ S, ∀ r ∈ S, q ≤ r := well_ord_princ S h2\n obtain (q : Nat) (h4 : q ∈ S ∧ ∀ r ∈ S, q ≤ r) from h3\n have qinS : q ∈ S := h4.left\n have qleast : ∀ r ∈ S, q ≤ r := h4.right\n define at qinS --qinS : ∃ (p : Nat), p * p = 2 * (q * q) ∧ q ≠ 0\n obtain (p : Nat) (h5 : p * p = 2 * (q * q) ∧ q ≠ 0) from qinS\n have pqsqrt2 : p * p = 2 * (q * q) := h5.left\n have qne0 : q ≠ 0 := h5.right\n have h6 : nat_even (p * p) := Exists.intro (q * q) pqsqrt2\n rewrite [sq_even_iff_even p] at h6 --h6 : nat_even p\n obtain (p' : Nat) (p'halfp : p = 2 * p') from h6\n have h7 : 2 * (2 * (p' * p')) = 2 * (q * q) := by\n rewrite [←pqsqrt2, p'halfp]\n ring\n done\n have h8 : 2 > 0 := by decide\n rewrite [mul_left_cancel_iff_of_pos h8] at h7\n --h7 : 2 * (p' * p') = q * q\n have h9 : nat_even (q * q) := Exists.intro (p' * p') h7.symm\n rewrite [sq_even_iff_even q] at h9 --h9 : nat_even q\n obtain (q' : Nat) (q'halfq : q = 2 * q') from h9\n have h10 : 2 * (p' * p') = 2 * (2 * (q' * q')) := by\n rewrite [h7, q'halfq]\n ring\n done\n rewrite [mul_left_cancel_iff_of_pos h8] at h10\n --h10 : p' * p' = 2 * (q' * q')\n have q'ne0 : q' ≠ 0 := by\n contradict qne0 with h11\n rewrite [q'halfq, h11]\n rfl\n done\n have q'inS : q' ∈ S := Exists.intro p' (And.intro h10 q'ne0)\n have qleq' : q ≤ q' := qleast q' q'inS\n rewrite [q'halfq] at qleq' --qleq' : 2 * q' ≤ q'\n contradict q'ne0\n linarith\n done\n\n\nExercises\n\n--Hint: Use Exercise_6_1_16a1 and Exercise_6_1_16a2\n--from the exercises of Section 6.1.\nlemma sq_even_iff_even (n : Nat) :\n nat_even (n * n) ↔ nat_even n := sorry\n\n\n--This theorem proves that the square root of 6 is irrational\ntheorem Exercise_6_4_4a :\n ¬∃ (q p : Nat), p * p = 6 * (q * q) ∧ q ≠ 0 := sorry\n\n\ntheorem Exercise_6_4_5 :\n ∀ n ≥ 
12, ∃ (a b : Nat), 3 * a + 7 * b = n := sorry\n\n\ntheorem Exercise_6_4_7a : ∀ (n : Nat),\n (Sum i from 0 to n, Fib i) + 1 = Fib (n + 2) := sorry\n\n\ntheorem Exercise_6_4_7c : ∀ (n : Nat),\n Sum i from 0 to n, Fib (2 * i + 1) = Fib (2 * n + 2) := sorry\n\n\ntheorem Exercise_6_4_8a : ∀ (m n : Nat),\n Fib (m + n + 1) = Fib m * Fib n + Fib (m + 1) * Fib (n + 1) := sorry\n\n\ntheorem Exercise_6_4_8d : ∀ (m k : Nat), Fib m ∣ Fib (m * k) := sorry\nHint for #7: Let m be an arbitrary natural number, and then use induction on k. For the induction step, you must prove Fib m ∣ Fib (m * (k + 1)). If m = 0 ∨ k = 0, then this is easy. If not, then use exists_eq_add_one_of_ne_zero to obtain a natural number j such that m * k = j + 1, and therefore m * (k + 1) = j + m + 1, and then apply Exercise_6_4_8a.\n\n\ndef Fib_like (n : Nat) : Nat :=\n match n with\n | 0 => 1\n | 1 => 2\n | k + 2 => 2 * (Fib_like k) + Fib_like (k + 1)\n\ntheorem Fib_like_formula : ∀ (n : Nat), Fib_like n = 2 ^ n := sorry\n\n\ndef triple_rec (n : Nat) : Nat :=\n match n with\n | 0 => 0\n | 1 => 2\n | 2 => 4\n | k + 3 => 4 * triple_rec k +\n 6 * triple_rec (k + 1) + triple_rec (k + 2)\n\ntheorem triple_rec_formula :\n ∀ (n : Nat), triple_rec n = 2 ^ n * Fib n := sorry\n\n10. In this exercise you will prove that the numbers q and r in Example_6_4_1 are unique. It is helpful to prove a lemma first.\nlemma quot_rem_unique_lemma {m q r q' r' : Nat}\n (h1 : m * q + r = m * q' + r') (h2 : r' < m) : q ≤ q' := sorry\n\ntheorem quot_rem_unique (m q r q' r' : Nat)\n (h1 : m * q + r = m * q' + r') (h2 : r < m) (h3 : r' < m) :\n q = q' ∧ r = r' := sorry\n11. 
Use the theorem in the previous exercise to prove the following characterization of n / m and n % m.\ntheorem div_mod_char (m n q r : Nat)\n (h1 : n = m * q + r) (h2 : r < m) : q = n / m ∧ r = n % m := sorry" }, { "objectID": "Chap6.html#closures-again", @@ -284,21 +284,21 @@ "href": "Chap7.html", "title": "7  Number Theory", "section": "", - "text": "$$\n\\newcommand{\\setmin}{\\mathbin{\\backslash}}\n\\newcommand{\\symmdiff}{\\bigtriangleup}\n$$" + "text": "$$\n\\newcommand{\\setmin}{\\mathbin{\\backslash}}\n\\newcommand{\\symmdiff}{\\mathbin{∆}}\n$$" }, { "objectID": "Chap7.html#greatest-common-divisors", "href": "Chap7.html#greatest-common-divisors", "title": "7  Number Theory", "section": "7.1. Greatest Common Divisors", - "text": "7.1. Greatest Common Divisors\nThe proofs in this chapter and the next are significantly longer than those in previous chapters. As a result, we will skip some details in the text, leaving proofs of a number of theorems as exercises for you. The most interesting of these exercises are included in the exercise lists at the ends of the sections; for the rest, you can compare your solutions to proofs that can be found in the Lean package that accompanies this book. Also, we will occasionally use theorems that we have not used before without explanation. If necessary, you can use #check to look up what they say.\nSection 7.1 of HTPI introduces the Euclidean algorithm for computing the greatest common divisor (gcd) of two positive integers \\(a\\) and \\(b\\). The motivation for the algorithm is the fact that if \\(r\\) is the remainder when \\(a\\) is divided by \\(b\\), then any natural number that divides both \\(a\\) and \\(b\\) also divides \\(r\\), and any natural number that divides both \\(b\\) and \\(r\\) also divides \\(a\\).\nLet’s prove these statements in Lean. Recall that in Lean, the remainder when a is divided by b is called a mod b, and it is denoted a % b. 
We’ll prove the first statement, and leave the second as an exercise for you. It will be convenient for our work with greatest common divisors in Lean to let a and b be natural numbers rather than positive integers (thus allowing either of them to be zero).\ntheorem dvd_mod_of_dvd_a_b {a b d : Nat}\n (h1 : d ∣ a) (h2 : d ∣ b) : d ∣ (a % b) := by\n set q : Nat := a / b\n have h3 : b * q + a % b = a := Nat.div_add_mod a b\n obtain (j : Nat) (h4 : a = d * j) from h1\n obtain (k : Nat) (h5 : b = d * k) from h2\n define --Goal : ∃ (c : Nat), a % b = d * c\n apply Exists.intro (j - k * q)\n show a % b = d * (j - k * q) from\n calc a % b\n _ = b * q + a % b - b * q := (Nat.add_sub_cancel_left _ _).symm\n _ = a - b * q := by rw [h3]\n _ = d * j - d * (k * q) := by rw [h4, h5, mul_assoc]\n _ = d * (j - k * q) := (Nat.mul_sub_left_distrib _ _ _).symm\n done\n\ntheorem dvd_a_of_dvd_b_mod {a b d : Nat}\n (h1 : d ∣ b) (h2 : d ∣ (a % b)) : d ∣ a := sorry\nThese theorems tell us that the gcd of a and b is the same as the gcd of b and a % b, which suggests that the following recursive definition should compute the gcd of a and b:\ndef **gcd:: (a b : Nat) : Nat :=\n match b with\n | 0 => a\n | n + 1 => gcd (n + 1) (a % (n + 1))\nUnfortunately, Lean puts a red squiggle under gcd, and it displays in the Infoview a long error message that begins fail to show termination. What is Lean complaining about?\nThe problem is that recursive definitions are dangerous. To understand the danger, consider the following recursive definition:\ndef loop (n : Nat) : Nat := loop (n + 1)\nSuppose we try to use this definition to compute loop 3. The definition would lead us to perform the following calculation:\n\nloop 3 = loop 4 = loop 5 = loop 6 = ...\n\nClearly this calculation will go on forever and will never produce an answer. 
So the definition of loop does not actually succeed in defining a function from Nat to Nat.\nLean insists that recursive definitions must avoid such nonterminating calculations. Why did it accept all of our previous recursive definitions? The reason is that in each case, the definition of the value of the function at a natural number n referred only to values of the function at numbers smaller than n. Since a decreasing list of natural numbers cannot go on forever, such definitions lead to calculations that are guaranteed to terminate.\nWhat about our recursive definition of gcd a b? This function has two arguments, a and b, and when b = n + 1, the definition asks us to compute gcd (n + 1) (a % (n + 1)). The first argument here could actually be larger than the first argument in the value we are trying to compute, gcd a b. But the second argument will always be smaller, and that will suffice to guarantee that the calculation terminates. We can tell Lean to focus on the second argument by adding a termination_by clause to the end of our recursive definition:\ndef gcd (a b : Nat) : Nat :=\n match b with\n | 0 => a\n | n + 1 => **gcd (n + 1) (a % (n + 1))::\n termination_by gcd a b => b\nUnfortunately, Lean still isn’t satisfied, but the error message this time is more helpful. The message says that Lean failed to prove termination, and at the end of the message it says that the goal it failed to prove is a % (n + 1) < Nat.succ n. Here Nat.succ n denotes the successor of n—that is, n + 1—so Lean was trying to prove that a % (n + 1) < n + 1, which is precisely what is needed to show that the second argument of gcd (n + 1) (a % (n + 1)) is smaller than the second argument of gcd a b when b = n + 1. We’ll need to provide a proof of this goal to convince Lean to accept our recursive definition. 
Fortunately, it’s not hard to prove:\nlemma mod_succ_lt (a n : Nat) : a % (n + 1) < n + 1 := by\n have h : n + 1 > 0 := Nat.succ_pos n\n show a % (n + 1) < n + 1 from Nat.mod_lt a h\n done\nLean’s error message suggests several ways to fix the problem with our recursive definition. We’ll use the first suggestion: Use `have` expressions to prove the remaining goals. Here, finally, is the definition of gcd that Lean is willing to accept:\ndef gcd (a b : Nat) : Nat :=\n match b with\n | 0 => a\n | n + 1 =>\n have : a % (n + 1) < n + 1 := mod_succ_lt a n\n gcd (n + 1) (a % (n + 1))\n termination_by gcd a b => b\nNotice that in the have expression, we have not bothered to specify an identifier for the assertion being proven, since we never need to refer to it. Let’s try out our gcd function:\n++#eval:: gcd 672 161 --Answer: 7. Note 672 = 7 * 96 and 161 = 7 * 23.\nTo establish the main properties of gcd a b we’ll need several lemmas. We prove some of them and leave others as exercises.\nlemma gcd_base (a : Nat) : gcd a 0 = a := by rfl\n\nlemma gcd_nonzero (a : Nat) {b : Nat} (h : b ≠ 0) :\n gcd a b = gcd b (a % b) := by\n obtain (n : Nat) (h2 : b = n + 1) from exists_eq_add_one_of_ne_zero h\n rewrite [h2] --Goal : gcd a (n + 1) = gcd (n + 1) (a % (n + 1))\n rfl\n done\n\nlemma mod_nonzero_lt (a : Nat) {b : Nat} (h : b ≠ 0) : a % b < b := sorry\n\nlemma dvd_self (n : Nat) : n ∣ n := sorry\nOne of the most important properties of gcd a b is that it divides both a and b. We prove it by strong induction on b.\ntheorem gcd_dvd : ∀ (b a : Nat), (gcd a b) ∣ a ∧ (gcd a b) ∣ b := by\n by_strong_induc\n fix b : Nat\n assume ih : ∀ b_1 < b, ∀ (a : Nat), (gcd a b_1) ∣ a ∧ (gcd a b_1) ∣ b_1\n fix a : Nat\n by_cases h1 : b = 0\n · -- Case 1. h1 : b = 0\n rewrite [h1, gcd_base] --Goal: a ∣ a ∧ a ∣ 0\n apply And.intro (dvd_self a)\n define\n apply Exists.intro 0\n rfl\n done\n · -- Case 2. 
h1 : b ≠ 0\n rewrite [gcd_nonzero a h1]\n --Goal : gcd b (a % b) ∣ a ∧ gcd b (a % b) ∣ b\n have h2 : a % b < b := mod_nonzero_lt a h1\n have h3 : (gcd b (a % b)) ∣ b ∧ (gcd b (a % b)) ∣ (a % b) :=\n ih (a % b) h2 b\n apply And.intro _ h3.left\n show (gcd b (a % b)) ∣ a from dvd_a_of_dvd_b_mod h3.left h3.right\n done\n done\nYou may wonder why we didn’t start the proof like this:\ntheorem gcd_dvd : ∀ (a b : Nat), (gcd a b) ∣ a ∧ (gcd a b) ∣ b := by\n fix a : Nat\n by_strong_induc\n fix b : Nat\n assume ih : ∀ b_1 < b, (gcd a b_1) ∣ a ∧ (gcd a b_1) ∣ b_1\nIn fact, this approach wouldn’t have worked. It is an interesting exercise to try to complete this version of the proof and see why it fails.\nAnother interesting question is why we asserted both (gcd a b) ∣ a and (gcd a b) ∣ b in the same theorem. Wouldn’t it have been easier to give separate proofs of the statements ∀ (b a : Nat), (gcd a b) ∣ a and ∀ (b a : Nat), (gcd a b) ∣ b? Again, you might find it enlightening to see why that wouldn’t have worked. However, now that we have proven both divisibility statements, we can state them as separate theorems:\ntheorem gcd_dvd_left (a b : Nat) : (gcd a b) ∣ a := (gcd_dvd b a).left\n\ntheorem gcd_dvd_right (a b : Nat) : (gcd a b) ∣ b := (gcd_dvd b a).right\nNext we turn to Theorem 7.1.4 in HTPI, which says that there are integers \\(s\\) and \\(t\\) such that \\(\\text{gcd}(a, b) = s a + t b\\). (We say that \\(\\text{gcd}(a, b)\\) can be written as a linear combination of \\(a\\) and \\(b\\).) In HTPI, this is proven by using an extended version of the Euclidean algorithm to compute the coefficients \\(s\\) and \\(t\\). Here we will use a different recursive procedure to compute \\(s\\) and \\(t\\). If \\(b = 0\\), then \\(\\text{gcd}(a, b) = a = 1 \\cdot a + 0 \\cdot b\\), so we can use the values \\(s = 1\\) and \\(t = 0\\). Otherwise, let \\(q\\) and \\(r\\) be the quotient and remainder when \\(a\\) is divided by \\(b\\). 
Then \\(a = bq + r\\) and \\(\\text{gcd}(a, b) = \\text{gcd}(b, r)\\). Now suppose that we have already computed integers \\(s'\\) and \\(t'\\) such that \\[\n\\text{gcd}(b, r) = s' b + t' r.\n\\] Then \\[\\begin{align*}\n\\text{gcd}(a, b) &= \\text{gcd}(b, r) = s' b + t' r\\\\\n&= s' b + t' (a - bq) = t' a + (s' - t'q)b.\n\\end{align*}\\] Thus, to write \\(\\text{gcd}(a, b) = s a + t b\\) we can use the values \\[\\begin{equation}\\tag{$*$}\ns = t', \\qquad t = s' - t'q.\n\\end{equation}\\]\nWe will use these equations as the basis for recursive definitions of Lean functions gcd_c1 and gcd_c2 such that the required coefficients can be obtained from the formulas s = gcd_c1 a b and t = gcd_c2 a b. Notice that s and t could be negative, so they must have type Int, not Nat. As a result, in definitions and theorems involving gcd_c1 and gcd_c2 we will sometimes have to deal with coercion of natural numbers to integers.\nThe functions gcd_c1 and gcd_c2 will be mutually recursive; in other words, each will be defined not only in terms of itself but also in terms of the other. Fortunately, Lean allows for such mutual recursion. 
Here are the definitions we will use.\nmutual\n def gcd_c1 (a b : Nat) : Int :=\n match b with\n | 0 => 1\n | n + 1 => \n have : a % (n + 1) < n + 1 := mod_succ_lt a n\n gcd_c2 (n + 1) (a % (n + 1))\n --Corresponds to s = t' in (*)\n\n def gcd_c2 (a b : Nat) : Int :=\n match b with\n | 0 => 0\n | n + 1 =>\n have : a % (n + 1) < n + 1 := mod_succ_lt a n\n gcd_c1 (n + 1) (a % (n + 1)) -\n (gcd_c2 (n + 1) (a % (n + 1))) * ↑(a / (n + 1))\n --Corresponds to t = s' - t'q in (*)\nend\n termination_by\n gcd_c1 a b => b\n gcd_c2 a b => b\nNotice that in the definition of gcd_c2, the quotient a / (n + 1) is computed using natural-number division, but it is then coerced to be an integer so that it can be multiplied by the integer gcd_c2 (n + 1) (a % (n + 1)).\nOur main theorem about these functions is that they give the coefficients needed to write gcd a b as a linear combination of a and b. As usual, stating a few lemmas first helps with the proof. We leave the proofs of two of them as exercises for you (hint: imitate the proof of gcd_nonzero above).\nlemma gcd_c1_base (a : Nat) : gcd_c1 a 0 = 1 := by rfl\n\nlemma gcd_c1_nonzero (a : Nat) {b : Nat} (h : b ≠ 0) :\n gcd_c1 a b = gcd_c2 b (a % b) := sorry\n\nlemma gcd_c2_base (a : Nat) : gcd_c2 a 0 = 0 := by rfl\n\nlemma gcd_c2_nonzero (a : Nat) {b : Nat} (h : b ≠ 0) :\n gcd_c2 a b = gcd_c1 b (a % b) - (gcd_c2 b (a % b)) * ↑(a / b) := sorry\nWith that preparation, we are ready to prove that gcd_c1 a b and gcd_c2 a b give coefficients for expressing gcd a b as a linear combination of a and b. Of course, the theorem is proven by strong induction. For clarity, we’ll write the coercions explicitly in this proof. 
We’ll make a few comments after the proof that may help you follow the details.\ntheorem gcd_lin_comb : ∀ (b a : Nat),\n (gcd_c1 a b) * ↑a + (gcd_c2 a b) * ↑b = ↑(gcd a b) := by\n by_strong_induc\n fix b : Nat\n assume ih : ∀ b_1 < b, ∀ (a : Nat),\n (gcd_c1 a b_1) * ↑a + (gcd_c2 a b_1) * ↑b_1 = ↑(gcd a b_1)\n fix a : Nat\n by_cases h1 : b = 0\n · -- Case 1. h1 : b = 0\n rewrite [h1, gcd_c1_base, gcd_c2_base, gcd_base]\n --Goal : 1 * ↑a + 0 * ↑0 = ↑a\n ring\n done\n · -- Case 2. h1 : b ≠ 0\n rewrite [gcd_c1_nonzero a h1, gcd_c2_nonzero a h1, gcd_nonzero a h1]\n --Goal : gcd_c2 b (a % b) * ↑a +\n -- (gcd_c1 b (a % b) - gcd_c2 b (a % b) * ↑(a / b)) * ↑b =\n -- ↑(gcd b (a % b))\n set r : Nat := a % b\n set q : Nat := a / b\n set s : Int := gcd_c1 b r\n set t : Int := gcd_c2 b r\n --Goal : t * ↑a + (s - t * ↑q) * ↑b = ↑(gcd b r)\n have h2 : r < b := mod_nonzero_lt a h1\n have h3 : s * ↑b + t * ↑r = ↑(gcd b r) := ih r h2 b\n have h4 : b * q + r = a := Nat.div_add_mod a b\n rewrite [←h3, ←h4]\n rewrite [Nat.cast_add, Nat.cast_mul]\n --Goal : t * (↑b * ↑q + ↑r) + (s - t * ↑q) * ↑b = s * ↑b + t * ↑r\n ring\n done\n done\nIn case 2, we have introduced the variables r, q, s, and t to simplify the notation. Notice that the set tactic automatically plugs in this notation in the goal. After the step rewrite [←h3, ←h4], the goal contains the expression ↑(b * q + r). You can use the #check command to see why Nat.cast_add and Nat.cast_mul convert this expression to first ↑(b * q) + ↑r and then ↑b * ↑q + ↑r. Without those steps, the ring tactic would not have been able to complete the proof.\nWe can try out the functions gcd_c1 and gcd_c2 as follows:\n++#eval:: gcd_c1 672 161 --Answer: 6\n++#eval:: gcd_c2 672 161 --Answer: -25\n --Note 6 * 672 - 25 * 161 = 4032 - 4025 = 7 = gcd 672 161\nFinally, we turn to Theorem 7.1.6 in HTPI, which expresses one of the senses in which gcd a b is the greatest common divisor of a and b. 
Our proof follows the strategy of the proof in HTPI, with one additional step: we begin by using the theorem Int.coe_nat_dvd to change the goal from d ∣ gcd a b to ↑d ∣ ↑(gcd a b) (where the coercions are from Nat to Int), so that the rest of the proof can work with integer algebra rather than natural-number algebra.\ntheorem Theorem_7_1_6 {d a b : Nat} (h1 : d ∣ a) (h2 : d ∣ b) :\n d ∣ gcd a b := by\n rewrite [←Int.coe_nat_dvd] --Goal : ↑d ∣ ↑(gcd a b)\n set s : Int := gcd_c1 a b\n set t : Int := gcd_c2 a b\n have h3 : s * ↑a + t * ↑b = ↑(gcd a b) := gcd_lin_comb b a\n rewrite [←h3] --Goal : ↑d ∣ s * ↑a + t * ↑b\n obtain (j : Nat) (h4 : a = d * j) from h1\n obtain (k : Nat) (h5 : b = d * k) from h2\n rewrite [h4, h5, Nat.cast_mul, Nat.cast_mul]\n --Goal : ↑d ∣ s * (↑d * ↑j) + t * (↑d * ↑k)\n define\n apply Exists.intro (s * ↑j + t * ↑k)\n ring\n done\nWe will ask you in the exercises to prove that, among the common divisors of a and b, gcd a b is the greatest with respect to the usual ordering of the natural numbers (as long as gcd a b ≠ 0).\n\nExercises\n\ntheorem dvd_a_of_dvd_b_mod {a b d : Nat}\n (h1 : d ∣ b) (h2 : d ∣ (a % b)) : d ∣ a := sorry\n\n\nlemma gcd_comm_lt {a b : Nat} (h : a < b) : gcd a b = gcd b a := sorry\n\ntheorem gcd_comm (a b : Nat) : gcd a b = gcd b a := sorry\n\n\ntheorem Exercise_7_1_5 (a b : Nat) (n : Int) :\n (∃ (s t : Int), s * a + t * b = n) ↔ (↑(gcd a b) : Int) ∣ n := sorry\n\n\ntheorem Exercise_7_1_6 (a b c : Nat) :\n gcd a b = gcd (a + b * c) b := sorry\n\n\ntheorem gcd_is_nonzero {a b : Nat} (h : a ≠ 0 ∨ b ≠ 0) :\n gcd a b ≠ 0 := sorry\n\n\ntheorem gcd_greatest {a b d : Nat} (h1 : gcd a b ≠ 0)\n (h2 : d ∣ a) (h3 : d ∣ b) : d ≤ gcd a b := sorry\n\n\nlemma Lemma_7_1_10a {a b : Nat}\n (n : Nat) (h : a ∣ b) : (n * a) ∣ (n * b) := sorry\n\nlemma Lemma_7_1_10b {a b n : Nat}\n (h1 : n ≠ 0) (h2 : (n * a) ∣ (n * b)) : a ∣ b := sorry\n\nlemma Lemma_7_1_10c {a b : Nat}\n (h1 : a ∣ b) (h2 : b ∣ a) : a = b := sorry\n\ntheorem Exercise_7_1_10 (a 
b n : Nat) :\n gcd (n * a) (n * b) = n * gcd a b := sorry" + "text": "7.1. Greatest Common Divisors\nThe proofs in this chapter and the next are significantly longer than those in previous chapters. As a result, we will skip some details in the text, leaving proofs of a number of theorems as exercises for you. The most interesting of these exercises are included in the exercise lists at the ends of the sections; for the rest, you can compare your solutions to proofs that can be found in the Lean package that accompanies this book. Also, we will occasionally use theorems that we have not used before without explanation. If necessary, you can use #check to look up what they say.\nSection 7.1 of HTPI introduces the Euclidean algorithm for computing the greatest common divisor (gcd) of two positive integers \\(a\\) and \\(b\\). The motivation for the algorithm is the fact that if \\(r\\) is the remainder when \\(a\\) is divided by \\(b\\), then any natural number that divides both \\(a\\) and \\(b\\) also divides \\(r\\), and any natural number that divides both \\(b\\) and \\(r\\) also divides \\(a\\).\nLet’s prove these statements in Lean. Recall that in Lean, the remainder when a is divided by b is called a mod b, and it is denoted a % b. We’ll prove the first statement, and leave the second as an exercise for you. 
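As a concrete instance of the first statement, using numbers that will reappear below: 7 divides both 672 = 7 * 96 and 161 = 7 * 23, and indeed 7 divides the remainder 672 % 161 = 28 = 7 * 4. We can confirm the remainder in Lean:\n\n#eval 672 % 161 --Answer: 28\n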
It will be convenient for our work with greatest common divisors in Lean to let a and b be natural numbers rather than positive integers (thus allowing either of them to be zero).\ntheorem dvd_mod_of_dvd_a_b {a b d : Nat}\n (h1 : d ∣ a) (h2 : d ∣ b) : d ∣ (a % b) := by\n set q : Nat := a / b\n have h3 : b * q + a % b = a := Nat.div_add_mod a b\n obtain (j : Nat) (h4 : a = d * j) from h1\n obtain (k : Nat) (h5 : b = d * k) from h2\n define --Goal : ∃ (c : Nat), a % b = d * c\n apply Exists.intro (j - k * q)\n show a % b = d * (j - k * q) from\n calc a % b\n _ = b * q + a % b - b * q := (Nat.add_sub_cancel_left _ _).symm\n _ = a - b * q := by rw [h3]\n _ = d * j - d * (k * q) := by rw [h4, h5, mul_assoc]\n _ = d * (j - k * q) := (Nat.mul_sub_left_distrib _ _ _).symm\n done\n\ntheorem dvd_a_of_dvd_b_mod {a b d : Nat}\n (h1 : d ∣ b) (h2 : d ∣ (a % b)) : d ∣ a := sorry\nThese theorems tell us that the gcd of a and b is the same as the gcd of b and a % b, which suggests that the following recursive definition should compute the gcd of a and b:\ndef gcd (a b : Nat) : Nat :=\n match b with\n | 0 => a\n | n + 1 => **gcd (n + 1) (a % (n + 1))::\nUnfortunately, Lean puts a red squiggle under gcd (n + 1) (a % (n + 1)), and it displays in the Infoview a long error message that begins fail to show termination. What is Lean complaining about?\nThe problem is that recursive definitions are dangerous. To understand the danger, consider the following recursive definition:\ndef loop (n : Nat) : Nat := loop (n + 1)\nSuppose we try to use this definition to compute loop 3. The definition would lead us to perform the following calculation:\n\nloop 3 = loop 4 = loop 5 = loop 6 = ...\n\nClearly this calculation will go on forever and will never produce an answer. So the definition of loop does not actually succeed in defining a function from Nat to Nat.\nLean insists that recursive definitions must avoid such nonterminating calculations. 
Why did it accept all of our previous recursive definitions? The reason is that in each case, the definition of the value of the function at a natural number n referred only to values of the function at numbers smaller than n. Since a decreasing list of natural numbers cannot go on forever, such definitions lead to calculations that are guaranteed to terminate.\nWhat about our recursive definition of gcd a b? This function has two arguments, a and b, and when b = n + 1, the definition asks us to compute gcd (n + 1) (a % (n + 1)). The first argument here could actually be larger than the first argument in the value we are trying to compute, gcd a b. But the second argument will always be smaller, and that will suffice to guarantee that the calculation terminates. We can tell Lean to focus on the second argument b by adding a termination_by clause to the end of our recursive definition:\ndef gcd (a b : Nat) : Nat :=\n match b with\n | 0 => a\n | n + 1 => **gcd (n + 1) (a % (n + 1))::\n termination_by b\nUnfortunately, Lean still isn’t satisfied, but the error message this time is more helpful. The message says that Lean failed to prove termination, and at the end of the message it says that the goal it failed to prove is a % (n + 1) < n + 1, which is precisely what is needed to show that the second argument of gcd (n + 1) (a % (n + 1)) is smaller than the second argument of gcd a b when b = n + 1. We’ll need to provide a proof of this goal to convince Lean to accept our recursive definition. Fortunately, it’s not hard to prove:\nlemma mod_succ_lt (a n : Nat) : a % (n + 1) < n + 1 := by\n have h : n + 1 > 0 := Nat.succ_pos n\n show a % (n + 1) < n + 1 from Nat.mod_lt a h\n done\nLean’s error message suggests several ways to fix the problem with our recursive definition. We’ll use the first suggestion: Use `have` expressions to prove the remaining goals. 
Here, finally, is the definition of gcd that Lean is willing to accept:\ndef gcd (a b : Nat) : Nat :=\n match b with\n | 0 => a\n | n + 1 =>\n have : a % (n + 1) < n + 1 := mod_succ_lt a n\n gcd (n + 1) (a % (n + 1))\n termination_by b\nNotice that in the have expression, we have not bothered to specify an identifier for the assertion being proven, since we never need to refer to it. Let’s try out our gcd function:\n++#eval:: gcd 672 161 --Answer: 7. Note 672 = 7 * 96 and 161 = 7 * 23.\nTo establish the main properties of gcd a b we’ll need several lemmas. We prove some of them and leave others as exercises.\nlemma gcd_base (a : Nat) : gcd a 0 = a := by rfl\n\nlemma gcd_nonzero (a : Nat) {b : Nat} (h : b ≠ 0) :\n gcd a b = gcd b (a % b) := by\n obtain (n : Nat) (h2 : b = n + 1) from exists_eq_add_one_of_ne_zero h\n rewrite [h2] --Goal : gcd a (n + 1) = gcd (n + 1) (a % (n + 1))\n rfl\n done\n\nlemma mod_nonzero_lt (a : Nat) {b : Nat} (h : b ≠ 0) : a % b < b := sorry\n\nlemma dvd_self (n : Nat) : n ∣ n := sorry\nOne of the most important properties of gcd a b is that it divides both a and b. We prove it by strong induction on b.\ntheorem gcd_dvd : ∀ (b a : Nat), (gcd a b) ∣ a ∧ (gcd a b) ∣ b := by\n by_strong_induc\n fix b : Nat\n assume ih : ∀ b_1 < b, ∀ (a : Nat), (gcd a b_1) ∣ a ∧ (gcd a b_1) ∣ b_1\n fix a : Nat\n by_cases h1 : b = 0\n · -- Case 1. h1 : b = 0\n rewrite [h1, gcd_base] --Goal: a ∣ a ∧ a ∣ 0\n apply And.intro (dvd_self a)\n define\n apply Exists.intro 0\n rfl\n done\n · -- Case 2. 
h1 : b ≠ 0\n rewrite [gcd_nonzero a h1]\n --Goal : gcd b (a % b) ∣ a ∧ gcd b (a % b) ∣ b\n have h2 : a % b < b := mod_nonzero_lt a h1\n have h3 : (gcd b (a % b)) ∣ b ∧ (gcd b (a % b)) ∣ (a % b) :=\n ih (a % b) h2 b\n apply And.intro _ h3.left\n show (gcd b (a % b)) ∣ a from dvd_a_of_dvd_b_mod h3.left h3.right\n done\n done\nYou may wonder why we didn’t start the proof like this:\ntheorem gcd_dvd : ∀ (a b : Nat), (gcd a b) ∣ a ∧ (gcd a b) ∣ b := by\n fix a : Nat\n by_strong_induc\n fix b : Nat\n assume ih : ∀ b_1 < b, (gcd a b_1) ∣ a ∧ (gcd a b_1) ∣ b_1\nIn fact, this approach wouldn’t have worked. It is an interesting exercise to try to complete this version of the proof and see why it fails.\nAnother interesting question is why we asserted both (gcd a b) ∣ a and (gcd a b) ∣ b in the same theorem. Wouldn’t it have been easier to give separate proofs of the statements ∀ (b a : Nat), (gcd a b) ∣ a and ∀ (b a : Nat), (gcd a b) ∣ b? Again, you might find it enlightening to see why that wouldn’t have worked. However, now that we have proven both divisibility statements, we can state them as separate theorems:\ntheorem gcd_dvd_left (a b : Nat) : (gcd a b) ∣ a := (gcd_dvd b a).left\n\ntheorem gcd_dvd_right (a b : Nat) : (gcd a b) ∣ b := (gcd_dvd b a).right\nNext we turn to Theorem 7.1.4 in HTPI, which says that there are integers \\(s\\) and \\(t\\) such that \\(\\text{gcd}(a, b) = s a + t b\\). (We say that \\(\\text{gcd}(a, b)\\) can be written as a linear combination of \\(a\\) and \\(b\\).) In HTPI, this is proven by using an extended version of the Euclidean algorithm to compute the coefficients \\(s\\) and \\(t\\). Here we will use a different recursive procedure to compute \\(s\\) and \\(t\\). If \\(b = 0\\), then \\(\\text{gcd}(a, b) = a = 1 \\cdot a + 0 \\cdot b\\), so we can use the values \\(s = 1\\) and \\(t = 0\\). Otherwise, let \\(q\\) and \\(r\\) be the quotient and remainder when \\(a\\) is divided by \\(b\\). 
Then \\(a = bq + r\\) and \\(\\text{gcd}(a, b) = \\text{gcd}(b, r)\\). Now suppose that we have already computed integers \\(s'\\) and \\(t'\\) such that \\[\n\\text{gcd}(b, r) = s' b + t' r.\n\\] Then \\[\\begin{align*}\n\\text{gcd}(a, b) &= \\text{gcd}(b, r) = s' b + t' r\\\\\n&= s' b + t' (a - bq) = t' a + (s' - t'q)b.\n\\end{align*}\\] Thus, to write \\(\\text{gcd}(a, b) = s a + t b\\) we can use the values \\[\\begin{equation}\\tag{$*$}\ns = t', \\qquad t = s' - t'q.\n\\end{equation}\\]\nWe will use these equations as the basis for recursive definitions of Lean functions gcd_c1 and gcd_c2 such that the required coefficients can be obtained from the formulas s = gcd_c1 a b and t = gcd_c2 a b. Notice that s and t could be negative, so they must have type Int, not Nat. As a result, in definitions and theorems involving gcd_c1 and gcd_c2 we will sometimes have to deal with coercion of natural numbers to integers.\nThe functions gcd_c1 and gcd_c2 will be mutually recursive; in other words, each will be defined not only in terms of itself but also in terms of the other. Fortunately, Lean allows for such mutual recursion. 
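Before turning to the definitions, it may help to trace the recurrence ($*$) by hand on a small example. The numbers below are our own illustration, not from HTPI:

```lean
-- Descending phase: repeated division with remainder
--   gcd 12 5 = gcd 5 2 = gcd 2 1 = gcd 1 0 = 1
--   (12 = 5*2 + 2,  5 = 2*2 + 1,  2 = 1*2 + 0)
-- Ascending phase: apply s = t', t = s' - t'*q at each level
--   gcd 1 0  : s = 1,  t = 0               (base case)
--   gcd 2 1  : s = 0,  t = 1 - 0*2 = 1     (check: 0*2 + 1*1 = 1)
--   gcd 5 2  : s = 1,  t = 0 - 1*2 = -2    (check: 1*5 - 2*2 = 1)
--   gcd 12 5 : s = -2, t = 1 - (-2)*2 = 5  (check: -2*12 + 5*5 = 1)
```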
Here are the definitions we will use.\nmutual\n def gcd_c1 (a b : Nat) : Int :=\n match b with\n | 0 => 1\n | n + 1 =>\n have : a % (n + 1) < n + 1 := mod_succ_lt a n\n gcd_c2 (n + 1) (a % (n + 1))\n --Corresponds to s = t'\n termination_by b\n\n def gcd_c2 (a b : Nat) : Int :=\n match b with\n | 0 => 0\n | n + 1 =>\n have : a % (n + 1) < n + 1 := mod_succ_lt a n\n gcd_c1 (n + 1) (a % (n + 1)) -\n (gcd_c2 (n + 1) (a % (n + 1))) * ↑(a / (n + 1))\n --Corresponds to t = s' - t'q\n termination_by b\nend\nNotice that in the definition of gcd_c2, the quotient a / (n + 1) is computed using natural-number division, but it is then coerced to be an integer so that it can be multiplied by the integer gcd_c2 (n + 1) (a % (n + 1)).\nOur main theorem about these functions is that they give the coefficients needed to write gcd a b as a linear combination of a and b. As usual, stating a few lemmas first helps with the proof. We leave the proofs of two of them as exercises for you (hint: imitate the proof of gcd_nonzero above).\nlemma gcd_c1_base (a : Nat) : gcd_c1 a 0 = 1 := by rfl\n\nlemma gcd_c1_nonzero (a : Nat) {b : Nat} (h : b ≠ 0) :\n gcd_c1 a b = gcd_c2 b (a % b) := sorry\n\nlemma gcd_c2_base (a : Nat) : gcd_c2 a 0 = 0 := by rfl\n\nlemma gcd_c2_nonzero (a : Nat) {b : Nat} (h : b ≠ 0) :\n gcd_c2 a b = gcd_c1 b (a % b) - (gcd_c2 b (a % b)) * ↑(a / b) := sorry\nWith that preparation, we are ready to prove that gcd_c1 a b and gcd_c2 a b give coefficients for expressing gcd a b as a linear combination of a and b. Of course, the theorem is proven by strong induction. For clarity, we’ll write the coercions explicitly in this proof. 
We’ll make a few comments after the proof that may help you follow the details.\ntheorem gcd_lin_comb : ∀ (b a : Nat),\n (gcd_c1 a b) * ↑a + (gcd_c2 a b) * ↑b = ↑(gcd a b) := by\n by_strong_induc\n fix b : Nat\n assume ih : ∀ b_1 < b, ∀ (a : Nat),\n (gcd_c1 a b_1) * ↑a + (gcd_c2 a b_1) * ↑b_1 = ↑(gcd a b_1)\n fix a : Nat\n by_cases h1 : b = 0\n · -- Case 1. h1 : b = 0\n rewrite [h1, gcd_c1_base, gcd_c2_base, gcd_base]\n --Goal : 1 * ↑a + 0 * ↑0 = ↑a\n ring\n done\n · -- Case 2. h1 : b ≠ 0\n rewrite [gcd_c1_nonzero a h1, gcd_c2_nonzero a h1, gcd_nonzero a h1]\n --Goal : gcd_c2 b (a % b) * ↑a +\n -- (gcd_c1 b (a % b) - gcd_c2 b (a % b) * ↑(a / b)) * ↑b =\n -- ↑(gcd b (a % b))\n set r : Nat := a % b\n set q : Nat := a / b\n set s : Int := gcd_c1 b r\n set t : Int := gcd_c2 b r\n --Goal : t * ↑a + (s - t * ↑q) * ↑b = ↑(gcd b r)\n have h2 : r < b := mod_nonzero_lt a h1\n have h3 : s * ↑b + t * ↑r = ↑(gcd b r) := ih r h2 b\n have h4 : b * q + r = a := Nat.div_add_mod a b\n rewrite [←h3, ←h4]\n rewrite [Nat.cast_add, Nat.cast_mul]\n --Goal : t * (↑b * ↑q + ↑r) + (s - t * ↑q) * ↑b = s * ↑b + t * ↑r\n ring\n done\n done\nIn case 2, we have introduced the variables r, q, s, and t to simplify the notation. Notice that the set tactic automatically plugs in this notation in the goal. After the step rewrite [←h3, ←h4], the goal contains the expression ↑(b * q + r). You can use the #check command to see why Nat.cast_add and Nat.cast_mul convert this expression to first ↑(b * q) + ↑r and then ↑b * ↑q + ↑r. Without those steps, the ring tactic would not have been able to complete the proof.\nWe can try out the functions gcd_c1 and gcd_c2 as follows:\n++#eval:: gcd_c1 672 161 --Answer: 6\n++#eval:: gcd_c2 672 161 --Answer: -25\n --Note 6 * 672 - 25 * 161 = 4032 - 4025 = 7 = gcd 672 161\nFinally, we turn to Theorem 7.1.6 in HTPI, which expresses one of the senses in which gcd a b is the greatest common divisor of a and b. 
Our proof follows the strategy of the proof in HTPI, with one additional step: we begin by using the theorem Int.natCast_dvd_natCast to change the goal from d ∣ gcd a b to ↑d ∣ ↑(gcd a b) (where the coercions are from Nat to Int), so that the rest of the proof can work with integer algebra rather than natural-number algebra.\ntheorem Theorem_7_1_6 {d a b : Nat} (h1 : d ∣ a) (h2 : d ∣ b) :\n d ∣ gcd a b := by\n rewrite [←Int.natCast_dvd_natCast] --Goal : ↑d ∣ ↑(gcd a b)\n set s : Int := gcd_c1 a b\n set t : Int := gcd_c2 a b\n have h3 : s * ↑a + t * ↑b = ↑(gcd a b) := gcd_lin_comb b a\n rewrite [←h3] --Goal : ↑d ∣ s * ↑a + t * ↑b\n obtain (j : Nat) (h4 : a = d * j) from h1\n obtain (k : Nat) (h5 : b = d * k) from h2\n rewrite [h4, h5, Nat.cast_mul, Nat.cast_mul]\n --Goal : ↑d ∣ s * (↑d * ↑j) + t * (↑d * ↑k)\n define\n apply Exists.intro (s * ↑j + t * ↑k)\n ring\n done\nWe will ask you in the exercises to prove that, among the common divisors of a and b, gcd a b is the greatest with respect to the usual ordering of the natural numbers (as long as gcd a b ≠ 0).\n\nExercises\n\ntheorem dvd_a_of_dvd_b_mod {a b d : Nat}\n (h1 : d ∣ b) (h2 : d ∣ (a % b)) : d ∣ a := sorry\n\n\nlemma gcd_comm_lt {a b : Nat} (h : a < b) : gcd a b = gcd b a := sorry\n\ntheorem gcd_comm (a b : Nat) : gcd a b = gcd b a := sorry\n\n\ntheorem Exercise_7_1_5 (a b : Nat) (n : Int) :\n (∃ (s t : Int), s * a + t * b = n) ↔ (↑(gcd a b) : Int) ∣ n := sorry\n\n\ntheorem Exercise_7_1_6 (a b c : Nat) :\n gcd a b = gcd (a + b * c) b := sorry\n\n\ntheorem gcd_is_nonzero {a b : Nat} (h : a ≠ 0 ∨ b ≠ 0) :\n gcd a b ≠ 0 := sorry\n\n\ntheorem gcd_greatest {a b d : Nat} (h1 : gcd a b ≠ 0)\n (h2 : d ∣ a) (h3 : d ∣ b) : d ≤ gcd a b := sorry\n\n\nlemma Lemma_7_1_10a {a b : Nat}\n (n : Nat) (h : a ∣ b) : (n * a) ∣ (n * b) := sorry\n\nlemma Lemma_7_1_10b {a b n : Nat}\n (h1 : n ≠ 0) (h2 : (n * a) ∣ (n * b)) : a ∣ b := sorry\n\nlemma Lemma_7_1_10c {a b : Nat}\n (h1 : a ∣ b) (h2 : b ∣ a) : a = b := sorry\n\ntheorem 
Exercise_7_1_10 (a b n : Nat) :\n gcd (n * a) (n * b) = n * gcd a b := sorry

7.2. Prime Factorization\nA natural number \\(n\\) is said to be prime if it is at least 2 and it cannot be written as a product of two smaller natural numbers. Of course, we can write this definition in Lean.\ndef prime (n : Nat) : Prop :=\n 2 ≤ n ∧ ¬∃ (a b : Nat), a * b = n ∧ a < n ∧ b < n\nThe main goal of Section 7.2 of HTPI is to prove that every positive integer has a unique prime factorization; that is, it can be written in a unique way as the product of a nondecreasing list of prime numbers. To get started on this goal, we first prove that every number greater than or equal to 2 has a prime factor. We leave one lemma as an exercise for you (it is a natural-number version of Theorem_3_3_7).\ndef prime_factor (p n : Nat) : Prop := prime p ∧ p ∣ n\n\nlemma dvd_trans {a b c : Nat} (h1 : a ∣ b) (h2 : b ∣ c) : a ∣ c := sorry\n\nlemma exists_prime_factor : ∀ (n : Nat), 2 ≤ n →\n ∃ (p : Nat), prime_factor p n := by\n by_strong_induc\n fix n : Nat\n assume ih : ∀ n_1 < n, 2 ≤ n_1 → ∃ (p : Nat), prime_factor p n_1\n assume h1 : 2 ≤ n\n by_cases h2 : prime n\n · -- Case 1. h2 : prime n\n apply Exists.intro n\n define --Goal : prime n ∧ n ∣ n\n show prime n ∧ n ∣ n from And.intro h2 (dvd_self n)\n done\n · -- Case 2. 
h2 : ¬prime n\n define at h2\n --h2 : ¬(2 ≤ n ∧ ¬∃ (a b : Nat), a * b = n ∧ a < n ∧ b < n)\n demorgan at h2\n disj_syll h2 h1\n obtain (a : Nat) (h3 : ∃ (b : Nat), a * b = n ∧ a < n ∧ b < n) from h2\n obtain (b : Nat) (h4 : a * b = n ∧ a < n ∧ b < n) from h3\n have h5 : 2 ≤ a := by\n by_contra h6\n have h7 : a ≤ 1 := by linarith\n have h8 : n ≤ b :=\n calc n\n _ = a * b := h4.left.symm\n _ ≤ 1 * b := by rel [h7]\n _ = b := by ring\n linarith --n ≤ b contradicts b < n\n done\n have h6 : ∃ (p : Nat), prime_factor p a := ih a h4.right.left h5\n obtain (p : Nat) (h7 : prime_factor p a) from h6\n apply Exists.intro p\n define --Goal : prime p ∧ p ∣ n\n define at h7 --h7 : prime p ∧ p ∣ a\n apply And.intro h7.left\n have h8 : a ∣ n := by\n apply Exists.intro b\n show n = a * b from (h4.left).symm\n done\n show p ∣ n from dvd_trans h7.right h8\n done\n done\nOf course, by the well-ordering principle, an immediate consequence of this lemma is that every number greater than or equal to 2 has a smallest prime factor.\nlemma exists_least_prime_factor {n : Nat} (h : 2 ≤ n) :\n ∃ (p : Nat), prime_factor p n ∧\n ∀ (q : Nat), prime_factor q n → p ≤ q := by\n set S : Set Nat := {p : Nat | prime_factor p n}\n have h2 : ∃ (p : Nat), p ∈ S := exists_prime_factor n h\n show ∃ (p : Nat), prime_factor p n ∧\n ∀ (q : Nat), prime_factor q n → p ≤ q from well_ord_princ S h2\n done\nTo talk about prime factorizations of positive integers, we’ll need to introduce a new type. If U is any type, then List U is the type of lists of objects of type U. Such a list is written in square brackets, with the entries separated by commas. For example, [3, 7, 1] has type List Nat. The notation [] denotes the empty list, and if a has type U and l has type List U, then a :: l denotes the list consisting of a followed by the entries of l. The empty list is sometimes called the nil list, and the operation of constructing a list a :: l from a and l is called cons (short for construct). 
Every list can be constructed by applying the cons operation repeatedly, starting with the nil list. For example,\n\n[3, 7, 1] = 3 :: [7, 1] = 3 :: (7 :: [1]) = 3 :: (7 :: (1 :: [])).\n\nIf l has type List U and a has type U, then a ∈ l means that a is one of the entries in the list l. For example, 7 ∈ [3, 7, 1]. Lean knows several theorems about this notation:\n\n@List.not_mem_nil : ∀ {α : Type u_1} (a : α),\n a ∉ []\n\n@List.mem_cons : ∀ {α : Type u_1} {a b : α} {l : List α},\n a ∈ b :: l ↔ a = b ∨ a ∈ l\n\n@List.mem_cons_self : ∀ {α : Type u_1} (a : α) (l : List α),\n a ∈ a :: l\n\n@List.mem_cons_of_mem : ∀ {α : Type u_1} (y : α) {a : α} {l : List α},\n a ∈ l → a ∈ y :: l\n\nThe first two theorems give the conditions under which something is a member of the nil list or a list constructed by cons, and the last two are easy consequences of the second.\nTo define prime factorizations, we must define several concepts first. Some of these concepts are most easily defined recursively.\ndef all_prime (l : List Nat) : Prop := ∀ p ∈ l, prime p\n\ndef nondec (l : List Nat) : Prop :=\n match l with\n | [] => True --Of course, True is a proposition that is always true\n | n :: L => (∀ m ∈ L, n ≤ m) ∧ nondec L\n\ndef nondec_prime_list (l : List Nat) : Prop := all_prime l ∧ nondec l\n\ndef prod (l : List Nat) : Nat :=\n match l with\n | [] => 1\n | n :: L => n * (prod L)\n\ndef prime_factorization (n : Nat) (l : List Nat) : Prop :=\n nondec_prime_list l ∧ prod l = n\nAccording to these definitions, all_prime l means that every member of the list l is prime, nondec l means that every member of l is less than or equal to all later members, prod l is the product of all members of l, and prime_factorization n l means that l is a nondecreasing list of prime numbers whose product is n. 
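As a quick sanity check of the recursive definition of prod, we can unfold it on the sample list from above (our own illustration; it uses nothing beyond the definition of prod just given):

```lean
-- prod [3, 7, 1] = 3 * prod [7, 1]
--               = 3 * (7 * prod [1])
--               = 3 * (7 * (1 * prod []))
--               = 3 * (7 * (1 * 1)) = 21
#eval prod [3, 7, 1]   --Answer: 21
```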
It will be convenient to spell out some consequences of these definitions in several lemmas:\nlemma all_prime_nil : all_prime [] := by\n define --Goal : ∀ p ∈ [], prime p\n fix p : Nat\n contrapos --Goal : ¬prime p → p ∉ []\n assume h1 : ¬prime p\n show p ∉ [] from List.not_mem_nil p\n done\n\nlemma all_prime_cons (n : Nat) (L : List Nat) :\n all_prime (n :: L) ↔ prime n ∧ all_prime L := by\n apply Iff.intro\n · -- (→)\n assume h1 : all_prime (n :: L) --Goal : prime n ∧ all_prime L\n define at h1 --h1 : ∀ p ∈ n :: L, prime p\n apply And.intro (h1 n (List.mem_cons_self n L))\n define --Goal : ∀ p ∈ L, prime p\n fix p : Nat\n assume h2 : p ∈ L\n show prime p from h1 p (List.mem_cons_of_mem n h2)\n done\n · -- (←)\n assume h1 : prime n ∧ all_prime L --Goal : all_prime (n :: l)\n define : all_prime L at h1\n define\n fix p : Nat\n assume h2 : p ∈ n :: L\n rewrite [List.mem_cons] at h2 --h2 : p = n ∨ p ∈ L\n by_cases on h2\n · -- Case 1. h2 : p = n\n rewrite [h2]\n show prime n from h1.left\n done\n · -- Case 2. h2 : p ∈ L\n show prime p from h1.right p h2\n done\n done\n done\n\nlemma nondec_nil : nondec [] := by\n define --Goal : True\n trivial --trivial proves some obviously true statements, such as True\n done\n\nlemma nondec_cons (n : Nat) (L : List Nat) :\n nondec (n :: L) ↔ (∀ m ∈ L, n ≤ m) ∧ nondec L := by rfl\n\nlemma prod_nil : prod [] = 1 := by rfl\n\nlemma prod_cons : prod (n :: L) = n * (prod L) := by rfl\nBefore we can prove the existence of prime factorizations, we will need one more fact: every member of a list of natural numbers divides the product of the list. The proof will be by induction on the length of the list, so we will need to know how to work with lengths of lists in Lean. If l is a list, then the length of l is List.length l, which can also be written more briefly as l.length. 
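For example (our own check, using nothing beyond Lean's built-in List.length):

```lean
#eval List.length [3, 7, 1]   --Answer: 3
#eval [3, 7, 1].length        --Answer: 3
```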
We’ll need a few more theorems about lists:\n\n@List.length_eq_zero : ∀ {α : Type u_1} {l : List α},\n List.length l = 0 ↔ l = []\n\n@List.length_cons : ∀ {α : Type u_1} (a : α) (as : List α),\n List.length (a :: as) = Nat.succ (List.length as)\n\n@List.exists_cons_of_ne_nil : ∀ {α : Type u_1} {l : List α},\n l ≠ [] → ∃ (b : α), ∃ (L : List α), l = b :: L\n\nAnd we’ll need one more lemma, which follows from the three theorems above; we leave the proof as an exercise for you:\nlemma exists_cons_of_length_eq_succ {A : Type}\n {l : List A} {n : Nat} (h : l.length = n + 1) :\n ∃ (a : A) (L : List A), l = a :: L ∧ L.length = n := sorry\nWe can now prove that every member of a list of natural numbers divides the product of the list. After proving it by induction on the length of the list, we restate the lemma in a more convenient form.\nlemma list_elt_dvd_prod_by_length (a : Nat) : ∀ (n : Nat),\n ∀ (l : List Nat), l.length = n → a ∈ l → a ∣ prod l := by\n by_induc\n · --Base Case\n fix l : List Nat\n assume h1 : l.length = 0\n rewrite [List.length_eq_zero] at h1 --h1 : l = []\n rewrite [h1] --Goal : a ∈ [] → a ∣ prod []\n contrapos\n assume h2 : ¬a ∣ prod []\n show a ∉ [] from List.not_mem_nil a\n done\n · -- Induction Step\n fix n : Nat\n assume ih : ∀ (l : List Nat), List.length l = n → a ∈ l → a ∣ prod l\n fix l : List Nat\n assume h1 : l.length = n + 1 --Goal : a ∈ l → a ∣ prod l\n obtain (b : Nat) (h2 : ∃ (L : List Nat),\n l = b :: L ∧ L.length = n) from exists_cons_of_length_eq_succ h1\n obtain (L : List Nat) (h3 : l = b :: L ∧ L.length = n) from h2\n have h4 : a ∈ L → a ∣ prod L := ih L h3.right\n assume h5 : a ∈ l\n rewrite [h3.left, prod_cons] --Goal : a ∣ b * prod L\n rewrite [h3.left, List.mem_cons] at h5 --h5 : a = b ∨ a ∈ L\n by_cases on h5\n · -- Case 1. h5 : a = b\n apply Exists.intro (prod L)\n rewrite [h5]\n rfl\n done\n · -- Case 2. 
h5 : a ∈ L\n have h6 : a ∣ prod L := h4 h5\n have h7 : prod L ∣ b * prod L := by\n apply Exists.intro b\n ring\n done\n show a ∣ b * prod L from dvd_trans h6 h7\n done\n done\n done\n\nlemma list_elt_dvd_prod {a : Nat} {l : List Nat}\n (h : a ∈ l) : a ∣ prod l := by\n set n : Nat := l.length\n have h1 : l.length = n := by rfl\n show a ∣ prod l from list_elt_dvd_prod_by_length a n l h1 h\n done\nThe proof that every positive integer has a prime factorization is now long but straightforward.\nlemma exists_prime_factorization : ∀ (n : Nat), n ≥ 1 →\n ∃ (l : List Nat), prime_factorization n l := by\n by_strong_induc\n fix n : Nat\n assume ih : ∀ n_1 < n, n_1 ≥ 1 →\n ∃ (l : List Nat), prime_factorization n_1 l\n assume h1 : n ≥ 1\n by_cases h2 : n = 1\n · -- Case 1. h2 : n = 1\n apply Exists.intro []\n define\n apply And.intro\n · -- Proof of nondec_prime_list []\n define\n show all_prime [] ∧ nondec [] from\n And.intro all_prime_nil nondec_nil\n done\n · -- Proof of prod [] = n\n rewrite [prod_nil, h2]\n rfl\n done\n done\n · -- Case 2. 
h2 : n ≠ 1\n have h3 : n ≥ 2 := lt_of_le_of_ne' h1 h2\n obtain (p : Nat) (h4 : prime_factor p n ∧ ∀ (q : Nat),\n prime_factor q n → p ≤ q) from exists_least_prime_factor h3\n have p_prime_factor : prime_factor p n := h4.left\n define at p_prime_factor\n have p_prime : prime p := p_prime_factor.left\n have p_dvd_n : p ∣ n := p_prime_factor.right\n have p_least : ∀ (q : Nat), prime_factor q n → p ≤ q := h4.right\n obtain (m : Nat) (n_eq_pm : n = p * m) from p_dvd_n\n have h5 : m ≠ 0 := by\n contradict h1 with h6\n have h7 : n = 0 :=\n calc n\n _ = p * m := n_eq_pm\n _ = p * 0 := by rw [h6]\n _ = 0 := by ring\n rewrite [h7]\n decide\n done\n have m_pos : 0 < m := Nat.pos_of_ne_zero h5\n have m_lt_n : m < n := by\n define at p_prime\n show m < n from\n calc m\n _ < m + m := by linarith\n _ = 2 * m := by ring\n _ ≤ p * m := by rel [p_prime.left]\n _ = n := n_eq_pm.symm\n done\n obtain (L : List Nat) (h6 : prime_factorization m L)\n from ih m m_lt_n m_pos\n define at h6\n have ndpl_L : nondec_prime_list L := h6.left\n define at ndpl_L\n apply Exists.intro (p :: L)\n define\n apply And.intro\n · -- Proof of nondec_prime_list (p :: L)\n define\n apply And.intro\n · -- Proof of all_prime (p :: L)\n rewrite [all_prime_cons]\n show prime p ∧ all_prime L from And.intro p_prime ndpl_L.left\n done\n · -- Proof of nondec (p :: L)\n rewrite [nondec_cons]\n apply And.intro _ ndpl_L.right\n fix q : Nat\n assume q_in_L : q ∈ L\n have h7 : q ∣ prod L := list_elt_dvd_prod q_in_L\n rewrite [h6.right] at h7 --h7 : q ∣ m\n have h8 : m ∣ n := by\n apply Exists.intro p\n rewrite [n_eq_pm]\n ring\n done\n have q_dvd_n : q ∣ n := dvd_trans h7 h8\n have ap_L : all_prime L := ndpl_L.left\n define at ap_L\n have q_prime_factor : prime_factor q n :=\n And.intro (ap_L q q_in_L) q_dvd_n\n show p ≤ q from p_least q q_prime_factor\n done\n done\n · -- Proof of prod (p :: L) = n\n rewrite [prod_cons, h6.right, n_eq_pm]\n rfl\n done\n done\n done\nWe now turn to the proof that the prime factorization 
of a positive integer is unique. In preparation for that proof, HTPI defines two numbers to be relatively prime if their greatest common divisor is 1, and then it uses that concept to prove two theorems, 7.2.2 and 7.2.3. Here are similar proofs of those theorems in Lean, with the proof of one lemma left as an exercise. In the proof of Theorem 7.2.2, we begin, as we did in the proof of Theorem 7.1.6, by converting the goal from natural numbers to integers so that we can use integer algebra.\ndef rel_prime (a b : Nat) : Prop := gcd a b = 1\n\ntheorem Theorem_7_2_2 {a b c : Nat}\n (h1 : c ∣ a * b) (h2 : rel_prime a c) : c ∣ b := by\n rewrite [←Int.natCast_dvd_natCast] --Goal : ↑c ∣ ↑b\n define at h1; define at h2; define\n obtain (j : Nat) (h3 : a * b = c * j) from h1\n set s : Int := gcd_c1 a c\n set t : Int := gcd_c2 a c\n have h4 : s * ↑a + t * ↑c = ↑(gcd a c) := gcd_lin_comb c a\n rewrite [h2, Nat.cast_one] at h4 --h4 : s * ↑a + t * ↑c = (1 : Int)\n apply Exists.intro (s * ↑j + t * ↑b)\n show ↑b = ↑c * (s * ↑j + t * ↑b) from\n calc ↑b\n _ = (1 : Int) * ↑b := (one_mul _).symm\n _ = (s * ↑a + t * ↑c) * ↑b := by rw [h4]\n _ = s * (↑a * ↑b) + t * ↑c * ↑b := by ring\n _ = s * (↑c * ↑j) + t * ↑c * ↑b := by\n rw [←Nat.cast_mul a b, h3, Nat.cast_mul c j]\n _ = ↑c * (s * ↑j + t * ↑b) := by ring\n done\n\nlemma dvd_prime {a p : Nat}\n (h1 : prime p) (h2 : a ∣ p) : a = 1 ∨ a = p := sorry\n\nlemma rel_prime_of_prime_not_dvd {a p : Nat}\n (h1 : prime p) (h2 : ¬p ∣ a) : rel_prime a p := by\n have h3 : gcd a p ∣ a := gcd_dvd_left a p\n have h4 : gcd a p ∣ p := gcd_dvd_right a p\n have h5 : gcd a p = 1 ∨ gcd a p = p := dvd_prime h1 h4\n have h6 : gcd a p ≠ p := by\n contradict h2 with h6\n rewrite [h6] at h3\n show p ∣ a from h3\n done\n disj_syll h5 h6\n show rel_prime a p from h5\n done\n\ntheorem Theorem_7_2_3 {a b p : Nat}\n (h1 : prime p) (h2 : p ∣ a * b) : p ∣ a ∨ p ∣ b := by\n or_right with h3\n have h4 : rel_prime a p := rel_prime_of_prime_not_dvd h1 h3\n show p ∣ b from 
Theorem_7_2_2 h2 h4\n done\nTheorem 7.2.4 in HTPI extends Theorem 7.2.3 to show that if a prime number divides the product of a list of natural numbers, then it divides one of the numbers in the list. (Theorem 7.2.3 is the case of a list of length two.) The proof in HTPI is by induction on the length of the list, and we could use that method to prove the theorem in Lean. But look back at our proof of the lemma list_elt_dvd_prod_by_length, which also used induction on the length of a list. In the base case, we ended up proving that the nil list has the property stated in the lemma, and in the induction step we proved that if a list L has the property, then so does any list of the form b :: L. We could think of this as a kind of “induction on lists.” As we observed earlier, every list can be constructed by starting with the nil list and applying cons finitely many times. It follows that if the nil list has some property, and applying the cons operation to a list with the property produces another list with the property, then all lists have the property. (In fact, a similar principle was at work in our recursive definitions of nondec l and prod l.)\nLean has a theorem called List.rec that can be used to justify induction on lists. This is a little more convenient than induction on the length of a list, so we’ll use it to prove Theorem 7.2.4. The proof uses two lemmas, whose proofs we leave as exercises for you.\nlemma eq_one_of_dvd_one {n : Nat} (h : n ∣ 1) : n = 1 := sorry\n\nlemma prime_not_one {p : Nat} (h : prime p) : p ≠ 1 := sorry\n\ntheorem Theorem_7_2_4 {p : Nat} (h1 : prime p) :\n ∀ (l : List Nat), p ∣ prod l → ∃ a ∈ l, p ∣ a := by\n apply List.rec\n · -- Base Case. 
Goal : p ∣ prod [] → ∃ a ∈ [], p ∣ a\n rewrite [prod_nil]\n assume h2 : p ∣ 1\n show ∃ a ∈ [], p ∣ a from\n absurd (eq_one_of_dvd_one h2) (prime_not_one h1)\n done\n · -- Induction Step\n fix b : Nat\n fix L : List Nat\n assume ih : p ∣ prod L → ∃ a ∈ L, p ∣ a\n --Goal : p ∣ prod (b :: L) → ∃ a ∈ b :: L, p ∣ a\n assume h2 : p ∣ prod (b :: L)\n rewrite [prod_cons] at h2\n have h3 : p ∣ b ∨ p ∣ prod L := Theorem_7_2_3 h1 h2\n by_cases on h3\n · -- Case 1. h3 : p ∣ b\n apply Exists.intro b\n show b ∈ b :: L ∧ p ∣ b from\n And.intro (List.mem_cons_self b L) h3\n done\n · -- Case 2. h3 : p ∣ prod L\n obtain (a : Nat) (h4 : a ∈ L ∧ p ∣ a) from ih h3\n apply Exists.intro a\n show a ∈ b :: L ∧ p ∣ a from\n And.intro (List.mem_cons_of_mem b h4.left) h4.right\n done\n done\n done\nIn Theorem 7.2.4, if all members of the list l are prime, then we can conclude not merely that p divides some member of l, but that p is one of the members.\nlemma prime_in_list {p : Nat} {l : List Nat}\n (h1 : prime p) (h2 : all_prime l) (h3 : p ∣ prod l) : p ∈ l := by\n obtain (a : Nat) (h4 : a ∈ l ∧ p ∣ a) from Theorem_7_2_4 h1 l h3\n define at h2\n have h5 : prime a := h2 a h4.left\n have h6 : p = 1 ∨ p = a := dvd_prime h5 h4.right\n disj_syll h6 (prime_not_one h1)\n rewrite [h6]\n show a ∈ l from h4.left\n done\nThe uniqueness of prime factorizations follows from Theorem 7.2.5 of HTPI, which says that if two nondecreasing lists of prime numbers have the same product, then the two lists must be the same. In HTPI, a key step in the proof of Theorem 7.2.5 is to show that if two nondecreasing lists of prime numbers have the same product, then the last entry of one list is less than or equal to the last entry of the other. 
In Lean, because of the way the cons operation works, it is easier to work with the first entries of the lists.\nlemma first_le_first {p q : Nat} {l m : List Nat}\n (h1 : nondec_prime_list (p :: l)) (h2 : nondec_prime_list (q :: m))\n (h3 : prod (p :: l) = prod (q :: m)) : p ≤ q := by\n define at h1; define at h2\n have h4 : q ∣ prod (p :: l) := by\n define\n apply Exists.intro (prod m)\n rewrite [←prod_cons]\n show prod (p :: l) = prod (q :: m) from h3\n done\n have h5 : all_prime (q :: m) := h2.left\n rewrite [all_prime_cons] at h5\n have h6 : q ∈ p :: l := prime_in_list h5.left h1.left h4\n have h7 : nondec (p :: l) := h1.right\n rewrite [nondec_cons] at h7\n rewrite [List.mem_cons] at h6\n by_cases on h6\n · -- Case 1. h6 : q = p\n linarith\n done\n · -- Case 2. h6 : q ∈ l\n have h8 : ∀ m ∈ l, p ≤ m := h7.left\n show p ≤ q from h8 q h6\n done\n done\nThe proof of Theorem 7.2.5 is another proof by induction on lists. It uses a few more lemmas whose proofs we leave as exercises.\nlemma nondec_prime_list_tail {p : Nat} {l : List Nat}\n (h : nondec_prime_list (p :: l)) : nondec_prime_list l := sorry\n\nlemma cons_prod_not_one {p : Nat} {l : List Nat}\n (h : nondec_prime_list (p :: l)) : prod (p :: l) ≠ 1 := sorry\n\nlemma list_nil_iff_prod_one {l : List Nat} (h : nondec_prime_list l) :\n l = [] ↔ prod l = 1 := sorry\n\nlemma prime_pos {p : Nat} (h : prime p) : p > 0 := sorry\n\ntheorem Theorem_7_2_5 : ∀ (l1 l2 : List Nat),\n nondec_prime_list l1 → nondec_prime_list l2 →\n prod l1 = prod l2 → l1 = l2 := by\n apply List.rec\n · -- Base Case. 
Goal : ∀ (l2 : List Nat), nondec_prime_list [] →\n -- nondec_prime_list l2 → prod [] = prod l2 → [] = l2\n fix l2 : List Nat\n assume h1 : nondec_prime_list []\n assume h2 : nondec_prime_list l2\n assume h3 : prod [] = prod l2\n rewrite [prod_nil, eq_comm, ←list_nil_iff_prod_one h2] at h3\n show [] = l2 from h3.symm\n done\n · -- Induction Step\n fix p : Nat\n fix L1 : List Nat\n assume ih : ∀ (L2 : List Nat), nondec_prime_list L1 →\n nondec_prime_list L2 → prod L1 = prod L2 → L1 = L2\n -- Goal : ∀ (l2 : List Nat), nondec_prime_list (p :: L1) →\n -- nondec_prime_list l2 → prod (p :: L1) = prod l2 → p :: L1 = l2\n fix l2 : List Nat\n assume h1 : nondec_prime_list (p :: L1)\n assume h2 : nondec_prime_list l2\n assume h3 : prod (p :: L1) = prod l2\n have h4 : ¬prod (p :: L1) = 1 := cons_prod_not_one h1\n rewrite [h3, ←list_nil_iff_prod_one h2] at h4\n obtain (q : Nat) (h5 : ∃ (L : List Nat), l2 = q :: L) from\n List.exists_cons_of_ne_nil h4\n obtain (L2 : List Nat) (h6 : l2 = q :: L2) from h5\n rewrite [h6] at h2 --h2 : nondec_prime_list (q :: L2)\n rewrite [h6] at h3 --h3 : prod (p :: L1) = prod (q :: L2)\n have h7 : p ≤ q := first_le_first h1 h2 h3\n have h8 : q ≤ p := first_le_first h2 h1 h3.symm\n have h9 : p = q := by linarith\n rewrite [h9, prod_cons, prod_cons] at h3\n --h3 : q * prod L1 = q * prod L2\n have h10 : nondec_prime_list L1 := nondec_prime_list_tail h1\n have h11 : nondec_prime_list L2 := nondec_prime_list_tail h2\n define at h2\n have h12 : all_prime (q :: L2) := h2.left\n rewrite [all_prime_cons] at h12\n have h13 : q > 0 := prime_pos h12.left\n have h14 : prod L1 = prod L2 := Nat.eq_of_mul_eq_mul_left h13 h3\n have h15 : L1 = L2 := ih L2 h10 h11 h14\n rewrite [h6, h9, h15]\n rfl\n done\n done\nPutting it all together, we can finally prove the fundamental theorem of arithmetic, which is stated as Theorem 7.2.6 in HTPI:\ntheorem fund_thm_arith (n : Nat) (h : n ≥ 1) :\n ∃! 
(l : List Nat), prime_factorization n l := by\n exists_unique\n · -- Existence\n show ∃ (l : List Nat), prime_factorization n l from\n exists_prime_factorization n h\n done\n · -- Uniqueness\n fix l1 : List Nat; fix l2 : List Nat\n assume h1 : prime_factorization n l1\n assume h2 : prime_factorization n l2\n define at h1; define at h2\n have h3 : prod l1 = n := h1.right\n rewrite [←h2.right] at h3\n show l1 = l2 from Theorem_7_2_5 l1 l2 h1.left h2.left h3\n done\n done\n\nExercises\n\nlemma dvd_prime {a p : Nat}\n (h1 : prime p) (h2 : a ∣ p) : a = 1 ∨ a = p := sorry\n\n\n--Hints: Start with apply List.rec.\n--You may find the theorem mul_ne_zero useful.\ntheorem prod_nonzero_nonzero : ∀ (l : List Nat),\n (∀ a ∈ l, a ≠ 0) → prod l ≠ 0 := sorry\n\n\ntheorem rel_prime_iff_no_common_factor (a b : Nat) :\n rel_prime a b ↔ ¬∃ (p : Nat), prime p ∧ p ∣ a ∧ p ∣ b := sorry\n\n\ntheorem rel_prime_symm {a b : Nat} (h : rel_prime a b) :\n rel_prime b a := sorry\n\n\nlemma in_prime_factorization_iff_prime_factor {a : Nat} {l : List Nat}\n (h1 : prime_factorization a l) (p : Nat) :\n p ∈ l ↔ prime_factor p a := sorry\n\n\ntheorem Exercise_7_2_5 {a b : Nat} {l m : List Nat}\n (h1 : prime_factorization a l) (h2 : prime_factorization b m) :\n rel_prime a b ↔ (¬∃ (p : Nat), p ∈ l ∧ p ∈ m) := sorry\n\n\ntheorem Exercise_7_2_6 (a b : Nat) :\n rel_prime a b ↔ ∃ (s t : Int), s * a + t * b = 1 := sorry\n\n\ntheorem Exercise_7_2_7 {a b a' b' : Nat}\n (h1 : rel_prime a b) (h2 : a' ∣ a) (h3 : b' ∣ b) :\n rel_prime a' b' := sorry\n\n\ntheorem Exercise_7_2_9 {a b j k : Nat}\n (h1 : gcd a b ≠ 0) (h2 : a = j * gcd a b) (h3 : b = k * gcd a b) :\n rel_prime j k := sorry\n\n\ntheorem Exercise_7_2_17a (a b c : Nat) :\n gcd a (b * c) ∣ gcd a b * gcd a c := sorry" + "text": "7.2. Prime Factorization\nA natural number \\(n\\) is said to be prime if it is at least 2 and it cannot be written as a product of two smaller natural numbers. 
Of course, we can write this definition in Lean.\ndef prime (n : Nat) : Prop :=\n 2 ≤ n ∧ ¬∃ (a b : Nat), a * b = n ∧ a < n ∧ b < n\nThe main goal of Section 7.2 of HTPI is to prove that every positive integer has a unique prime factorization; that is, it can be written in a unique way as the product of a nondecreasing list of prime numbers. To get started on this goal, we first prove that every number greater than or equal to 2 has a prime factor. We leave one lemma as an exercise for you (it is a natural-number version of Theorem_3_3_7).\ndef prime_factor (p n : Nat) : Prop := prime p ∧ p ∣ n\n\nlemma dvd_trans {a b c : Nat} (h1 : a ∣ b) (h2 : b ∣ c) : a ∣ c := sorry\n\nlemma exists_prime_factor : ∀ (n : Nat), 2 ≤ n →\n ∃ (p : Nat), prime_factor p n := by\n by_strong_induc\n fix n : Nat\n assume ih : ∀ n_1 < n, 2 ≤ n_1 → ∃ (p : Nat), prime_factor p n_1\n assume h1 : 2 ≤ n\n by_cases h2 : prime n\n · -- Case 1. h2 : prime n\n apply Exists.intro n\n define --Goal : prime n ∧ n ∣ n\n show prime n ∧ n ∣ n from And.intro h2 (dvd_self n)\n done\n · -- Case 2. 
h2 : ¬prime n\n define at h2\n --h2 : ¬(2 ≤ n ∧ ¬∃ (a b : Nat), a * b = n ∧ a < n ∧ b < n)\n demorgan at h2\n disj_syll h2 h1\n obtain (a : Nat) (h3 : ∃ (b : Nat), a * b = n ∧ a < n ∧ b < n) from h2\n obtain (b : Nat) (h4 : a * b = n ∧ a < n ∧ b < n) from h3\n have h5 : 2 ≤ a := by\n by_contra h6\n have h7 : a ≤ 1 := by linarith\n have h8 : n ≤ b :=\n calc n\n _ = a * b := h4.left.symm\n _ ≤ 1 * b := by rel [h7]\n _ = b := by ring\n linarith --n ≤ b contradicts b < n\n done\n have h6 : ∃ (p : Nat), prime_factor p a := ih a h4.right.left h5\n obtain (p : Nat) (h7 : prime_factor p a) from h6\n apply Exists.intro p\n define --Goal : prime p ∧ p ∣ n\n define at h7 --h7 : prime p ∧ p ∣ a\n apply And.intro h7.left\n have h8 : a ∣ n := by\n apply Exists.intro b\n show n = a * b from (h4.left).symm\n done\n show p ∣ n from dvd_trans h7.right h8\n done\n done\nOf course, by the well-ordering principle, an immediate consequence of this lemma is that every number greater than or equal to 2 has a smallest prime factor.\nlemma exists_least_prime_factor {n : Nat} (h : 2 ≤ n) :\n ∃ (p : Nat), prime_factor p n ∧\n ∀ (q : Nat), prime_factor q n → p ≤ q := by\n set S : Set Nat := {p : Nat | prime_factor p n}\n have h2 : ∃ (p : Nat), p ∈ S := exists_prime_factor n h\n show ∃ (p : Nat), prime_factor p n ∧\n ∀ (q : Nat), prime_factor q n → p ≤ q from well_ord_princ S h2\n done\nTo talk about prime factorizations of positive integers, we’ll need to introduce a new type. If U is any type, then List U is the type of lists of objects of type U. Such a list is written in square brackets, with the entries separated by commas. For example, [3, 7, 1] has type List Nat. The notation [] denotes the empty list, and if a has type U and l has type List U, then a :: l denotes the list consisting of a followed by the entries of l. The empty list is sometimes called the nil list, and the operation of constructing a list a :: l from a and l is called cons (short for construct). 
Every list can be constructed by applying the cons operation repeatedly, starting with the nil list. For example,\n\n[3, 7, 1] = 3 :: [7, 1] = 3 :: (7 :: [1]) = 3 :: (7 :: (1 :: [])).\n\nIf l has type List U and a has type U, then a ∈ l means that a is one of the entries in the list l. For example, 7 ∈ [3, 7, 1]. Lean knows several theorems about this notation:\n\n@List.not_mem_nil : ∀ {α : Type u_1} (a : α),\n a ∉ []\n\n@List.mem_cons : ∀ {α : Type u_1} {a b : α} {l : List α},\n a ∈ b :: l ↔ a = b ∨ a ∈ l\n\n@List.mem_cons_self : ∀ {α : Type u_1} (a : α) (l : List α),\n a ∈ a :: l\n\n@List.mem_cons_of_mem : ∀ {α : Type u_1} (y : α) {a : α} {l : List α},\n a ∈ l → a ∈ y :: l\n\nThe first two theorems give the conditions under which something is a member of the nil list or a list constructed by cons, and the last two are easy consequences of the second.\nTo define prime factorizations, we must define several concepts first. Some of these concepts are most easily defined recursively.\ndef all_prime (l : List Nat) : Prop := ∀ p ∈ l, prime p\n\ndef nondec (l : List Nat) : Prop :=\n match l with\n | [] => True --Of course, True is a proposition that is always true\n | n :: L => (∀ m ∈ L, n ≤ m) ∧ nondec L\n\ndef nondec_prime_list (l : List Nat) : Prop := all_prime l ∧ nondec l\n\ndef prod (l : List Nat) : Nat :=\n match l with\n | [] => 1\n | n :: L => n * (prod L)\n\ndef prime_factorization (n : Nat) (l : List Nat) : Prop :=\n nondec_prime_list l ∧ prod l = n\nAccording to these definitions, all_prime l means that every member of the list l is prime, nondec l means that every member of l is less than or equal to all later members, prod l is the product of all members of l, and prime_factorization n l means that l is a nondecreasing list of prime numbers whose product is n. 
It will be convenient to spell out some consequences of these definitions in several lemmas:\nlemma all_prime_nil : all_prime [] := by\n define --Goal : ∀ p ∈ [], prime p\n fix p : Nat\n contrapos --Goal : ¬prime p → p ∉ []\n assume h1 : ¬prime p\n show p ∉ [] from List.not_mem_nil p\n done\n\nlemma all_prime_cons (n : Nat) (L : List Nat) :\n all_prime (n :: L) ↔ prime n ∧ all_prime L := by\n apply Iff.intro\n · -- (→)\n assume h1 : all_prime (n :: L) --Goal : prime n ∧ all_prime L\n define at h1 --h1 : ∀ p ∈ n :: L, prime p\n apply And.intro (h1 n (List.mem_cons_self n L))\n define --Goal : ∀ p ∈ L, prime p\n fix p : Nat\n assume h2 : p ∈ L\n show prime p from h1 p (List.mem_cons_of_mem n h2)\n done\n · -- (←)\n assume h1 : prime n ∧ all_prime L --Goal : all_prime (n :: L)\n define : all_prime L at h1\n define\n fix p : Nat\n assume h2 : p ∈ n :: L\n rewrite [List.mem_cons] at h2 --h2 : p = n ∨ p ∈ L\n by_cases on h2\n · -- Case 1. h2 : p = n\n rewrite [h2]\n show prime n from h1.left\n done\n · -- Case 2. h2 : p ∈ L\n show prime p from h1.right p h2\n done\n done\n done\n\nlemma nondec_nil : nondec [] := by\n define --Goal : True\n trivial --trivial proves some obviously true statements, such as True\n done\n\nlemma nondec_cons (n : Nat) (L : List Nat) :\n nondec (n :: L) ↔ (∀ m ∈ L, n ≤ m) ∧ nondec L := by rfl\n\nlemma prod_nil : prod [] = 1 := by rfl\n\nlemma prod_cons : prod (n :: L) = n * (prod L) := by rfl\nBefore we can prove the existence of prime factorizations, we will need one more fact: every member of a list of natural numbers divides the product of the list. The proof will be by induction on the length of the list, so we will need to know how to work with lengths of lists in Lean. If l is a list, then the length of l is List.length l, which can also be written more briefly as l.length. 
We’ll need a few more theorems about lists:\n\n@List.length_eq_zero : ∀ {α : Type u_1} {l : List α},\n List.length l = 0 ↔ l = []\n\n@List.length_cons : ∀ {α : Type u_1} (a : α) (as : List α),\n List.length (a :: as) = Nat.succ (List.length as)\n\n@List.exists_cons_of_ne_nil : ∀ {α : Type u_1} {l : List α},\n l ≠ [] → ∃ (b : α) (L : List α), l = b :: L\n\nAnd we’ll need one more lemma, which follows from the three theorems above; we leave the proof as an exercise for you:\nlemma exists_cons_of_length_eq_succ {A : Type}\n {l : List A} {n : Nat} (h : l.length = n + 1) :\n ∃ (a : A) (L : List A), l = a :: L ∧ L.length = n := sorry\nWe can now prove that every member of a list of natural numbers divides the product of the list. After proving it by induction on the length of the list, we restate the lemma in a more convenient form.\nlemma list_elt_dvd_prod_by_length (a : Nat) : ∀ (n : Nat),\n ∀ (l : List Nat), l.length = n → a ∈ l → a ∣ prod l := by\n by_induc\n · --Base Case\n fix l : List Nat\n assume h1 : l.length = 0\n rewrite [List.length_eq_zero] at h1 --h1 : l = []\n rewrite [h1] --Goal : a ∈ [] → a ∣ prod []\n contrapos\n assume h2 : ¬a ∣ prod []\n show a ∉ [] from List.not_mem_nil a\n done\n · -- Induction Step\n fix n : Nat\n assume ih : ∀ (l : List Nat), List.length l = n → a ∈ l → a ∣ prod l\n fix l : List Nat\n assume h1 : l.length = n + 1 --Goal : a ∈ l → a ∣ prod l\n obtain (b : Nat) (h2 : ∃ (L : List Nat),\n l = b :: L ∧ L.length = n) from exists_cons_of_length_eq_succ h1\n obtain (L : List Nat) (h3 : l = b :: L ∧ L.length = n) from h2\n have h4 : a ∈ L → a ∣ prod L := ih L h3.right\n assume h5 : a ∈ l\n rewrite [h3.left, prod_cons] --Goal : a ∣ b * prod L\n rewrite [h3.left, List.mem_cons] at h5 --h5 : a = b ∨ a ∈ L\n by_cases on h5\n · -- Case 1. h5 : a = b\n apply Exists.intro (prod L)\n rewrite [h5]\n rfl\n done\n · -- Case 2. 
h5 : a ∈ L\n have h6 : a ∣ prod L := h4 h5\n have h7 : prod L ∣ b * prod L := by\n apply Exists.intro b\n ring\n done\n show a ∣ b * prod L from dvd_trans h6 h7\n done\n done\n done\n\nlemma list_elt_dvd_prod {a : Nat} {l : List Nat}\n (h : a ∈ l) : a ∣ prod l := by\n set n : Nat := l.length\n have h1 : l.length = n := by rfl\n show a ∣ prod l from list_elt_dvd_prod_by_length a n l h1 h\n done\nThe proof that every positive integer has a prime factorization is now long but straightforward.\nlemma exists_prime_factorization : ∀ (n : Nat), n ≥ 1 →\n ∃ (l : List Nat), prime_factorization n l := by\n by_strong_induc\n fix n : Nat\n assume ih : ∀ n_1 < n, n_1 ≥ 1 →\n ∃ (l : List Nat), prime_factorization n_1 l\n assume h1 : n ≥ 1\n by_cases h2 : n = 1\n · -- Case 1. h2 : n = 1\n apply Exists.intro []\n define\n apply And.intro\n · -- Proof of nondec_prime_list []\n define\n show all_prime [] ∧ nondec [] from\n And.intro all_prime_nil nondec_nil\n done\n · -- Proof of prod [] = n\n rewrite [prod_nil, h2]\n rfl\n done\n done\n · -- Case 2. 
h2 : n ≠ 1\n have h3 : n ≥ 2 := lt_of_le_of_ne' h1 h2\n obtain (p : Nat) (h4 : prime_factor p n ∧ ∀ (q : Nat),\n prime_factor q n → p ≤ q) from exists_least_prime_factor h3\n have p_prime_factor : prime_factor p n := h4.left\n define at p_prime_factor\n have p_prime : prime p := p_prime_factor.left\n have p_dvd_n : p ∣ n := p_prime_factor.right\n have p_least : ∀ (q : Nat), prime_factor q n → p ≤ q := h4.right\n obtain (m : Nat) (n_eq_pm : n = p * m) from p_dvd_n\n have h5 : m ≠ 0 := by\n contradict h1 with h6\n have h7 : n = 0 :=\n calc n\n _ = p * m := n_eq_pm\n _ = p * 0 := by rw [h6]\n _ = 0 := by ring\n rewrite [h7]\n decide\n done\n have m_pos : 0 < m := Nat.pos_of_ne_zero h5\n have m_lt_n : m < n := by\n define at p_prime\n show m < n from\n calc m\n _ < m + m := by linarith\n _ = 2 * m := by ring\n _ ≤ p * m := by rel [p_prime.left]\n _ = n := n_eq_pm.symm\n done\n obtain (L : List Nat) (h6 : prime_factorization m L)\n from ih m m_lt_n m_pos\n define at h6\n have ndpl_L : nondec_prime_list L := h6.left\n define at ndpl_L\n apply Exists.intro (p :: L)\n define\n apply And.intro\n · -- Proof of nondec_prime_list (p :: L)\n define\n apply And.intro\n · -- Proof of all_prime (p :: L)\n rewrite [all_prime_cons]\n show prime p ∧ all_prime L from And.intro p_prime ndpl_L.left\n done\n · -- Proof of nondec (p :: L)\n rewrite [nondec_cons]\n apply And.intro _ ndpl_L.right\n fix q : Nat\n assume q_in_L : q ∈ L\n have h7 : q ∣ prod L := list_elt_dvd_prod q_in_L\n rewrite [h6.right] at h7 --h7 : q ∣ m\n have h8 : m ∣ n := by\n apply Exists.intro p\n rewrite [n_eq_pm]\n ring\n done\n have q_dvd_n : q ∣ n := dvd_trans h7 h8\n have ap_L : all_prime L := ndpl_L.left\n define at ap_L\n have q_prime_factor : prime_factor q n :=\n And.intro (ap_L q q_in_L) q_dvd_n\n show p ≤ q from p_least q q_prime_factor\n done\n done\n · -- Proof of prod (p :: L) = n\n rewrite [prod_cons, h6.right, n_eq_pm]\n rfl\n done\n done\n done\nWe now turn to the proof that the prime factorization 
of a positive integer is unique. In preparation for that proof, HTPI defines two numbers to be relatively prime if their greatest common divisor is 1, and then it uses that concept to prove two theorems, 7.2.2 and 7.2.3. Here are similar proofs of those theorems in Lean, with the proof of one lemma left as an exercise. In the proof of Theorem 7.2.2, we begin, as we did in the proof of Theorem 7.1.6, by converting the goal from natural numbers to integers so that we can use integer algebra.\ndef rel_prime (a b : Nat) : Prop := gcd a b = 1\n\ntheorem Theorem_7_2_2 {a b c : Nat}\n (h1 : c ∣ a * b) (h2 : rel_prime a c) : c ∣ b := by\n rewrite [←Int.natCast_dvd_natCast] --Goal : ↑c ∣ ↑b\n define at h1; define at h2; define\n obtain (j : Nat) (h3 : a * b = c * j) from h1\n set s : Int := gcd_c1 a c\n set t : Int := gcd_c2 a c\n have h4 : s * ↑a + t * ↑c = ↑(gcd a c) := gcd_lin_comb c a\n rewrite [h2, Nat.cast_one] at h4 --h4 : s * ↑a + t * ↑c = (1 : Int)\n apply Exists.intro (s * ↑j + t * ↑b)\n show ↑b = ↑c * (s * ↑j + t * ↑b) from\n calc ↑b\n _ = (1 : Int) * ↑b := (one_mul _).symm\n _ = (s * ↑a + t * ↑c) * ↑b := by rw [h4]\n _ = s * (↑a * ↑b) + t * ↑c * ↑b := by ring\n _ = s * (↑c * ↑j) + t * ↑c * ↑b := by\n rw [←Nat.cast_mul a b, h3, Nat.cast_mul c j]\n _ = ↑c * (s * ↑j + t * ↑b) := by ring\n done\n\nlemma dvd_prime {a p : Nat}\n (h1 : prime p) (h2 : a ∣ p) : a = 1 ∨ a = p := sorry\n\nlemma rel_prime_of_prime_not_dvd {a p : Nat}\n (h1 : prime p) (h2 : ¬p ∣ a) : rel_prime a p := by\n have h3 : gcd a p ∣ a := gcd_dvd_left a p\n have h4 : gcd a p ∣ p := gcd_dvd_right a p\n have h5 : gcd a p = 1 ∨ gcd a p = p := dvd_prime h1 h4\n have h6 : gcd a p ≠ p := by\n contradict h2 with h6\n rewrite [h6] at h3\n show p ∣ a from h3\n done\n disj_syll h5 h6\n show rel_prime a p from h5\n done\n\ntheorem Theorem_7_2_3 {a b p : Nat}\n (h1 : prime p) (h2 : p ∣ a * b) : p ∣ a ∨ p ∣ b := by\n or_right with h3\n have h4 : rel_prime a p := rel_prime_of_prime_not_dvd h1 h3\n show p ∣ b from 
Theorem_7_2_2 h2 h4\n done\nTheorem 7.2.4 in HTPI extends Theorem 7.2.3 to show that if a prime number divides the product of a list of natural numbers, then it divides one of the numbers in the list. (Theorem 7.2.3 is the case of a list of length two.) The proof in HTPI is by induction on the length of the list, and we could use that method to prove the theorem in Lean. But look back at our proof of the lemma list_elt_dvd_prod_by_length, which also used induction on the length of a list. In the base case, we ended up proving that the nil list has the property stated in the lemma, and in the induction step we proved that if a list L has the property, then so does any list of the form b :: L. We could think of this as a kind of “induction on lists.” As we observed earlier, every list can be constructed by starting with the nil list and applying cons finitely many times. It follows that if the nil list has some property, and applying the cons operation to a list with the property produces another list with the property, then all lists have the property. (In fact, a similar principle was at work in our recursive definitions of nondec l and prod l.)\nLean has a theorem called List.rec that can be used to justify induction on lists. This is a little more convenient than induction on the length of a list, so we’ll use it to prove Theorem 7.2.4. The proof uses two lemmas, whose proofs we leave as exercises for you.\nlemma eq_one_of_dvd_one {n : Nat} (h : n ∣ 1) : n = 1 := sorry\n\nlemma prime_not_one {p : Nat} (h : prime p) : p ≠ 1 := sorry\n\ntheorem Theorem_7_2_4 {p : Nat} (h1 : prime p) :\n ∀ (l : List Nat), p ∣ prod l → ∃ a ∈ l, p ∣ a := by\n apply List.rec\n · -- Base Case. 
Goal : p ∣ prod [] → ∃ a ∈ [], p ∣ a\n rewrite [prod_nil]\n assume h2 : p ∣ 1\n show ∃ a ∈ [], p ∣ a from\n absurd (eq_one_of_dvd_one h2) (prime_not_one h1)\n done\n · -- Induction Step\n fix b : Nat\n fix L : List Nat\n assume ih : p ∣ prod L → ∃ a ∈ L, p ∣ a\n --Goal : p ∣ prod (b :: L) → ∃ a ∈ b :: L, p ∣ a\n assume h2 : p ∣ prod (b :: L)\n rewrite [prod_cons] at h2\n have h3 : p ∣ b ∨ p ∣ prod L := Theorem_7_2_3 h1 h2\n by_cases on h3\n · -- Case 1. h3 : p ∣ b\n apply Exists.intro b\n show b ∈ b :: L ∧ p ∣ b from\n And.intro (List.mem_cons_self b L) h3\n done\n · -- Case 2. h3 : p ∣ prod L\n obtain (a : Nat) (h4 : a ∈ L ∧ p ∣ a) from ih h3\n apply Exists.intro a\n show a ∈ b :: L ∧ p ∣ a from\n And.intro (List.mem_cons_of_mem b h4.left) h4.right\n done\n done\n done\nIn Theorem 7.2.4, if all members of the list l are prime, then we can conclude not merely that p divides some member of l, but that p is one of the members.\nlemma prime_in_list {p : Nat} {l : List Nat}\n (h1 : prime p) (h2 : all_prime l) (h3 : p ∣ prod l) : p ∈ l := by\n obtain (a : Nat) (h4 : a ∈ l ∧ p ∣ a) from Theorem_7_2_4 h1 l h3\n define at h2\n have h5 : prime a := h2 a h4.left\n have h6 : p = 1 ∨ p = a := dvd_prime h5 h4.right\n disj_syll h6 (prime_not_one h1)\n rewrite [h6]\n show a ∈ l from h4.left\n done\nThe uniqueness of prime factorizations follows from Theorem 7.2.5 of HTPI, which says that if two nondecreasing lists of prime numbers have the same product, then the two lists must be the same. In HTPI, a key step in the proof of Theorem 7.2.5 is to show that if two nondecreasing lists of prime numbers have the same product, then the last entry of one list is less than or equal to the last entry of the other. 
In Lean, because of the way the cons operation works, it is easier to work with the first entries of the lists.\nlemma first_le_first {p q : Nat} {l m : List Nat}\n (h1 : nondec_prime_list (p :: l)) (h2 : nondec_prime_list (q :: m))\n (h3 : prod (p :: l) = prod (q :: m)) : p ≤ q := by\n define at h1; define at h2\n have h4 : q ∣ prod (p :: l) := by\n define\n apply Exists.intro (prod m)\n rewrite [←prod_cons]\n show prod (p :: l) = prod (q :: m) from h3\n done\n have h5 : all_prime (q :: m) := h2.left\n rewrite [all_prime_cons] at h5\n have h6 : q ∈ p :: l := prime_in_list h5.left h1.left h4\n have h7 : nondec (p :: l) := h1.right\n rewrite [nondec_cons] at h7\n rewrite [List.mem_cons] at h6\n by_cases on h6\n · -- Case 1. h6 : q = p\n linarith\n done\n · -- Case 2. h6 : q ∈ l\n have h8 : ∀ m ∈ l, p ≤ m := h7.left\n show p ≤ q from h8 q h6\n done\n done\nThe proof of Theorem 7.2.5 is another proof by induction on lists. It uses a few more lemmas whose proofs we leave as exercises.\nlemma nondec_prime_list_tail {p : Nat} {l : List Nat}\n (h : nondec_prime_list (p :: l)) : nondec_prime_list l := sorry\n\nlemma cons_prod_not_one {p : Nat} {l : List Nat}\n (h : nondec_prime_list (p :: l)) : prod (p :: l) ≠ 1 := sorry\n\nlemma list_nil_iff_prod_one {l : List Nat} (h : nondec_prime_list l) :\n l = [] ↔ prod l = 1 := sorry\n\nlemma prime_pos {p : Nat} (h : prime p) : p > 0 := sorry\n\ntheorem Theorem_7_2_5 : ∀ (l1 l2 : List Nat),\n nondec_prime_list l1 → nondec_prime_list l2 →\n prod l1 = prod l2 → l1 = l2 := by\n apply List.rec\n · -- Base Case. 
Goal : ∀ (l2 : List Nat), nondec_prime_list [] →\n -- nondec_prime_list l2 → prod [] = prod l2 → [] = l2\n fix l2 : List Nat\n assume h1 : nondec_prime_list []\n assume h2 : nondec_prime_list l2\n assume h3 : prod [] = prod l2\n rewrite [prod_nil, eq_comm, ←list_nil_iff_prod_one h2] at h3\n show [] = l2 from h3.symm\n done\n · -- Induction Step\n fix p : Nat\n fix L1 : List Nat\n assume ih : ∀ (L2 : List Nat), nondec_prime_list L1 →\n nondec_prime_list L2 → prod L1 = prod L2 → L1 = L2\n -- Goal : ∀ (l2 : List Nat), nondec_prime_list (p :: L1) →\n -- nondec_prime_list l2 → prod (p :: L1) = prod l2 → p :: L1 = l2\n fix l2 : List Nat\n assume h1 : nondec_prime_list (p :: L1)\n assume h2 : nondec_prime_list l2\n assume h3 : prod (p :: L1) = prod l2\n have h4 : ¬prod (p :: L1) = 1 := cons_prod_not_one h1\n rewrite [h3, ←list_nil_iff_prod_one h2] at h4\n obtain (q : Nat) (h5 : ∃ (L : List Nat), l2 = q :: L) from\n List.exists_cons_of_ne_nil h4\n obtain (L2 : List Nat) (h6 : l2 = q :: L2) from h5\n rewrite [h6] at h2 --h2 : nondec_prime_list (q :: L2)\n rewrite [h6] at h3 --h3 : prod (p :: L1) = prod (q :: L2)\n have h7 : p ≤ q := first_le_first h1 h2 h3\n have h8 : q ≤ p := first_le_first h2 h1 h3.symm\n have h9 : p = q := by linarith\n rewrite [h9, prod_cons, prod_cons] at h3\n --h3 : q * prod L1 = q * prod L2\n have h10 : nondec_prime_list L1 := nondec_prime_list_tail h1\n have h11 : nondec_prime_list L2 := nondec_prime_list_tail h2\n define at h2\n have h12 : all_prime (q :: L2) := h2.left\n rewrite [all_prime_cons] at h12\n have h13 : q > 0 := prime_pos h12.left\n have h14 : prod L1 = prod L2 := Nat.eq_of_mul_eq_mul_left h13 h3\n have h15 : L1 = L2 := ih L2 h10 h11 h14\n rewrite [h6, h9, h15]\n rfl\n done\n done\nPutting it all together, we can finally prove the fundamental theorem of arithmetic, which is stated as Theorem 7.2.6 in HTPI:\ntheorem fund_thm_arith (n : Nat) (h : n ≥ 1) :\n ∃! 
(l : List Nat), prime_factorization n l := by\n exists_unique\n · -- Existence\n show ∃ (l : List Nat), prime_factorization n l from\n exists_prime_factorization n h\n done\n · -- Uniqueness\n fix l1 : List Nat; fix l2 : List Nat\n assume h1 : prime_factorization n l1\n assume h2 : prime_factorization n l2\n define at h1; define at h2\n have h3 : prod l1 = n := h1.right\n rewrite [←h2.right] at h3\n show l1 = l2 from Theorem_7_2_5 l1 l2 h1.left h2.left h3\n done\n done\n\nExercises\n\nlemma dvd_prime {a p : Nat}\n (h1 : prime p) (h2 : a ∣ p) : a = 1 ∨ a = p := sorry\n\n\n--Hints: Start with apply List.rec.\n--You may find the theorem mul_ne_zero useful.\ntheorem prod_nonzero_nonzero : ∀ (l : List Nat),\n (∀ a ∈ l, a ≠ 0) → prod l ≠ 0 := sorry\n\n\ntheorem rel_prime_iff_no_common_factor (a b : Nat) :\n rel_prime a b ↔ ¬∃ (p : Nat), prime p ∧ p ∣ a ∧ p ∣ b := sorry\n\n\ntheorem rel_prime_symm {a b : Nat} (h : rel_prime a b) :\n rel_prime b a := sorry\n\n\nlemma in_prime_factorization_iff_prime_factor {a : Nat} {l : List Nat}\n (h1 : prime_factorization a l) (p : Nat) :\n p ∈ l ↔ prime_factor p a := sorry\n\n\ntheorem Exercise_7_2_5 {a b : Nat} {l m : List Nat}\n (h1 : prime_factorization a l) (h2 : prime_factorization b m) :\n rel_prime a b ↔ (¬∃ (p : Nat), p ∈ l ∧ p ∈ m) := sorry\n\n\ntheorem Exercise_7_2_6 (a b : Nat) :\n rel_prime a b ↔ ∃ (s t : Int), s * a + t * b = 1 := sorry\n\n\ntheorem Exercise_7_2_7 {a b a' b' : Nat}\n (h1 : rel_prime a b) (h2 : a' ∣ a) (h3 : b' ∣ b) :\n rel_prime a' b' := sorry\n\n\ntheorem Exercise_7_2_9 {a b j k : Nat}\n (h1 : gcd a b ≠ 0) (h2 : a = j * gcd a b) (h3 : b = k * gcd a b) :\n rel_prime j k := sorry\n\n\ntheorem Exercise_7_2_17a (a b c : Nat) :\n gcd a (b * c) ∣ gcd a b * gcd a c := sorry" }, { "objectID": "Chap7.html#modular-arithmetic", @@ -319,21 +319,21 @@ "href": "Chap7.html#public-key-cryptography", "title": "7  Number Theory", "section": "7.5. Public-Key Cryptography", - "text": "7.5. 
Public-Key Cryptography\nSection 7.5 of HTPI discusses the RSA public-key cryptography system. The system is based on the following theorem:\ntheorem Theorem_7_5_1 (p q n e d k m c : Nat)\n (p_prime : prime p) (q_prime : prime q) (p_ne_q : p ≠ q)\n (n_pq : n = p * q) (ed_congr_1 : e * d = k * (p - 1) * (q - 1) + 1)\n (h1 : [m]_n ^ e = [c]_n) : [c]_n ^ d = [m]_n\nFor an explanation of how the RSA system works and why Theorem_7_5_1 justifies it, see HTPI. Here we will focus on proving the theorem in Lean.\nWe will be applying Euler’s theorem to the prime numbers p and q, so we will need to know how to compute phi p and phi q. Fortunately, there is a simple formula we can use.\nlemma num_rp_prime {p : Nat} (h1 : prime p) :\n ∀ k < p, num_rp_below p (k + 1) = k := sorry\n\nlemma phi_prime {p : Nat} (h1 : prime p) : phi p = p - 1 := by\n have h2 : 1 ≤ p := prime_pos h1\n have h3 : p - 1 + 1 = p := Nat.sub_add_cancel h2\n have h4 : p - 1 < p := by linarith\n have h5 : num_rp_below p (p - 1 + 1) = p - 1 :=\n num_rp_prime h1 (p - 1) h4\n rewrite [h3] at h5\n show phi p = p - 1 from h5\n done\nWe will also need to use Lemma 7.4.5 from HTPI. To prove that lemma in Lean, we will use Theorem_7_2_2, which says that for natural numbers a, b, and c, if c ∣ a * b and c and a are relatively prime, then c ∣ b. But we will need to extend the theorem to allow b to be an integer rather than a natural number. To prove this extension, we use the Lean function Int.natAbs : Int → Nat, which computes the absolute value of an integer. 
Lean knows several theorems about this function:\n\n@Int.coe_nat_dvd_left : ∀ {n : ℕ} {z : ℤ}, ↑n ∣ z ↔ n ∣ Int.natAbs z\n\nInt.natAbs_mul : ∀ (a b : ℤ),\n Int.natAbs (a * b) = Int.natAbs a * Int.natAbs b\n\nInt.natAbs_ofNat : ∀ (n : ℕ), Int.natAbs ↑n = n\n\nWith the help of these theorems, our extended version of Theorem_7_2_2 follows easily from the original version:\ntheorem Theorem_7_2_2_Int {a c : Nat} {b : Int}\n (h1 : ↑c ∣ ↑a * b) (h2 : rel_prime a c) : ↑c ∣ b := by\n rewrite [Int.coe_nat_dvd_left, Int.natAbs_mul,\n Int.natAbs_ofNat] at h1 --h1 : c ∣ a * Int.natAbs b\n rewrite [Int.coe_nat_dvd_left] --Goal : c ∣ Int.natAbs b\n show c ∣ Int.natAbs b from Theorem_7_2_2 h1 h2\n done\nWith that preparation, we can now prove Lemma_7_4_5.\nlemma Lemma_7_4_5 {m n : Nat} (a b : Int) (h1 : rel_prime m n) :\n a ≡ b (MOD m * n) ↔ a ≡ b (MOD m) ∧ a ≡ b (MOD n) := by\n apply Iff.intro\n · -- (→)\n assume h2 : a ≡ b (MOD m * n)\n obtain (j : Int) (h3 : a - b = (m * n) * j) from h2\n apply And.intro\n · -- Proof of a ≡ b (MOD m)\n apply Exists.intro (n * j)\n show a - b = m * (n * j) from\n calc a - b\n _ = m * n * j := h3\n _ = m * (n * j) := by ring\n done\n · -- Proof of a ≡ b (MOD n)\n apply Exists.intro (m * j)\n show a - b = n * (m * j) from\n calc a - b\n _ = m * n * j := h3\n _ = n * (m * j) := by ring\n done\n done\n · -- (←)\n assume h2 : a ≡ b (MOD m) ∧ a ≡ b (MOD n)\n obtain (j : Int) (h3 : a - b = m * j) from h2.left\n have h4 : (↑n : Int) ∣ a - b := h2.right\n rewrite [h3] at h4 --h4 : ↑n ∣ ↑m * j\n have h5 : ↑n ∣ j := Theorem_7_2_2_Int h4 h1\n obtain (k : Int) (h6 : j = n * k) from h5\n apply Exists.intro k --Goal : a - b = ↑(m * n) * k\n rewrite [Nat.cast_mul] --Goal : a - b = ↑m * ↑n * k\n show a - b = (m * n) * k from\n calc a - b\n _ = m * j := h3\n _ = m * (n * k) := by rw [h6]\n _ = (m * n) * k := by ring\n done\n done\nFinally, we will need an exercise from Section 7.2, and we will need to know NeZero p for prime numbers p:\ntheorem rel_prime_symm {a 
b : Nat} (h : rel_prime a b) :\n rel_prime b a := sorry\n\nlemma prime_NeZero {p : Nat} (h : prime p) : NeZero p := by\n rewrite [neZero_iff] --Goal : p ≠ 0\n define at h\n linarith\n done\nMuch of the reasoning about modular arithmetic that we need for the proof of Theorem_7_5_1 is contained in a technical lemma:\nlemma Lemma_7_5_1 {p e d m c s : Nat} {t : Int}\n (h1 : prime p) (h2 : e * d = (p - 1) * s + 1)\n (h3 : m ^ e - c = p * t) :\n c ^ d ≡ m (MOD p) := by\n have h4 : m ^ e ≡ c (MOD p) := Exists.intro t h3\n have h5 : [m ^ e]_p = [c]_p := (cc_eq_iff_congr _ _ _).rtl h4\n rewrite [←Exercise_7_4_5_Nat] at h5 --h5 : [m]_p ^ e = [c]_p\n by_cases h6 : p ∣ m\n · -- Case 1. h6 : p ∣ m\n have h7 : m ≡ 0 (MOD p) := by\n obtain (j : Nat) (h8 : m = p * j) from h6\n apply Exists.intro (↑j : Int) --Goal : ↑m - 0 = ↑p * ↑j\n rewrite [h8, Nat.cast_mul]\n ring\n done\n have h8 : [m]_p = [0]_p := (cc_eq_iff_congr _ _ _).rtl h7\n have h9 : 0 < (e * d) := by\n rewrite [h2]\n have h10 : 0 ≤ (p - 1) * s := Nat.zero_le _\n linarith\n done\n have h10 : (0 : Int) ^ (e * d) = 0 := zero_pow h9\n have h11 : [c ^ d]_p = [m]_p :=\n calc [c ^ d]_p\n _ = [c]_p ^ d := by rw [Exercise_7_4_5_Nat]\n _ = ([m]_p ^ e) ^ d := by rw [h5]\n _ = [m]_p ^ (e * d) := by ring\n _ = [0]_p ^ (e * d) := by rw [h8]\n _ = [0 ^ (e * d)]_p := Exercise_7_4_5_Int _ _ _\n _ = [0]_p := by rw [h10]\n _ = [m]_p := by rw [h8]\n show c ^ d ≡ m (MOD p) from (cc_eq_iff_congr _ _ _).ltr h11\n done\n · -- Case 2. 
h6 : ¬p ∣ m\n have h7 : rel_prime m p := rel_prime_of_prime_not_dvd h1 h6\n have h8 : rel_prime p m := rel_prime_symm h7\n have h9 : NeZero p := prime_NeZero h1\n have h10 : (1 : Int) ^ s = 1 := by ring\n have h11 : [c ^ d]_p = [m]_p :=\n calc [c ^ d]_p\n _ = [c]_p ^ d := by rw [Exercise_7_4_5_Nat]\n _ = ([m]_p ^ e) ^ d := by rw [h5]\n _ = [m]_p ^ (e * d) := by ring\n _ = [m]_p ^ ((p - 1) * s + 1) := by rw [h2]\n _ = ([m]_p ^ (p - 1)) ^ s * [m]_p := by ring\n _ = ([m]_p ^ (phi p)) ^ s * [m]_p := by rw [phi_prime h1]\n _ = [1]_p ^ s * [m]_p := by rw [Theorem_7_4_2 h8]\n _ = [1 ^ s]_p * [m]_p := by rw [Exercise_7_4_5_Int]\n _ = [1]_p * [m]_p := by rw [h10]\n _ = [m]_p * [1]_p := by ring\n _ = [m]_p := Theorem_7_3_6_7 _\n show c ^ d ≡ m (MOD p) from (cc_eq_iff_congr _ _ _).ltr h11\n done\n done\nHere, finally, is the proof of Theorem_7_5_1:\ntheorem Theorem_7_5_1 (p q n e d k m c : Nat)\n (p_prime : prime p) (q_prime : prime q) (p_ne_q : p ≠ q)\n (n_pq : n = p * q) (ed_congr_1 : e * d = k * (p - 1) * (q - 1) + 1)\n (h1 : [m]_n ^ e = [c]_n) : [c]_n ^ d = [m]_n := by\n rewrite [Exercise_7_4_5_Nat, cc_eq_iff_congr] at h1\n --h1 : m ^ e ≡ c (MOD n)\n rewrite [Exercise_7_4_5_Nat, cc_eq_iff_congr]\n --Goal : c ^ d ≡ m (MOD n)\n obtain (j : Int) (h2 : m ^ e - c = n * j) from h1\n rewrite [n_pq, Nat.cast_mul] at h2\n --h2 : m ^ e - c = p * q * j\n have h3 : e * d = (p - 1) * (k * (q - 1)) + 1 := by\n rewrite [ed_congr_1]\n ring\n done\n have h4 : m ^ e - c = p * (q * j) := by\n rewrite [h2]\n ring\n done\n have congr_p : c ^ d ≡ m (MOD p) := Lemma_7_5_1 p_prime h3 h4\n have h5 : e * d = (q - 1) * (k * (p - 1)) + 1 := by\n rewrite [ed_congr_1]\n ring\n done\n have h6 : m ^ e - c = q * (p * j) := by\n rewrite [h2]\n ring\n done\n have congr_q : c ^ d ≡ m (MOD q) := Lemma_7_5_1 q_prime h5 h6\n have h7 : ¬q ∣ p := by\n by_contra h8\n have h9 : q = 1 ∨ q = p := dvd_prime p_prime h8\n disj_syll h9 (prime_not_one q_prime)\n show False from p_ne_q h9.symm\n done\n have h8 : rel_prime 
p q := rel_prime_of_prime_not_dvd q_prime h7\n rewrite [n_pq, Lemma_7_4_5 _ _ h8]\n show c ^ d ≡ m (MOD p) ∧ c ^ d ≡ m (MOD q) from\n And.intro congr_p congr_q\n done\n\nExercises\n\n--Hint: Use induction.\nlemma num_rp_prime {p : Nat} (h1 : prime p) :\n ∀ k < p, num_rp_below p (k + 1) = k := sorry\n\n\nlemma three_prime : prime 3 := sorry\n\n\n--Hint: Use the previous exercise, Exercise_7_2_7, and Theorem_7_4_2.\ntheorem Exercise_7_5_13a (a : Nat) (h1 : rel_prime 561 a) :\n a ^ 560 ≡ 1 (MOD 3) := sorry\n\n\n--Hint: Imitate the way Theorem_7_2_2_Int was proven from Theorem_7_2_2.\nlemma Theorem_7_2_3_Int {p : Nat} {a b : Int}\n (h1 : prime p) (h2 : ↑p ∣ a * b) : ↑p ∣ a ∨ ↑p ∣ b := sorry\n\n\n--Hint: Use the previous exercise.\ntheorem Exercise_7_5_14b (n : Nat) (b : Int)\n (h1 : prime n) (h2 : b ^ 2 ≡ 1 (MOD n)) :\n b ≡ 1 (MOD n) ∨ b ≡ -1 (MOD n) := sorry" + "text": "7.5. Public-Key Cryptography\nSection 7.5 of HTPI discusses the RSA public-key cryptography system. The system is based on the following theorem:\ntheorem Theorem_7_5_1 (p q n e d k m c : Nat)\n (p_prime : prime p) (q_prime : prime q) (p_ne_q : p ≠ q)\n (n_pq : n = p * q) (ed_congr_1 : e * d = k * (p - 1) * (q - 1) + 1)\n (h1 : [m]_n ^ e = [c]_n) : [c]_n ^ d = [m]_n\nFor an explanation of how the RSA system works and why Theorem_7_5_1 justifies it, see HTPI. Here we will focus on proving the theorem in Lean.\nWe will be applying Euler’s theorem to the prime numbers p and q, so we will need to know how to compute phi p and phi q. 
Fortunately, there is a simple formula we can use.\nlemma num_rp_prime {p : Nat} (h1 : prime p) :\n ∀ k < p, num_rp_below p (k + 1) = k := sorry\n\nlemma phi_prime {p : Nat} (h1 : prime p) : phi p = p - 1 := by\n have h2 : 1 ≤ p := prime_pos h1\n have h3 : p - 1 + 1 = p := Nat.sub_add_cancel h2\n have h4 : p - 1 < p := by linarith\n have h5 : num_rp_below p (p - 1 + 1) = p - 1 :=\n num_rp_prime h1 (p - 1) h4\n rewrite [h3] at h5\n show phi p = p - 1 from h5\n done\nWe will also need to use Lemma 7.4.5 from HTPI. To prove that lemma in Lean, we will use Theorem_7_2_2, which says that for natural numbers a, b, and c, if c ∣ a * b and c and a are relatively prime, then c ∣ b. But we will need to extend the theorem to allow b to be an integer rather than a natural number. To prove this extension, we use the Lean function Int.natAbs : Int → Nat, which computes the absolute value of an integer. Lean knows several theorems about this function:\n\n@Int.natCast_dvd : ∀ {n : ℤ} {m : ℕ}, ↑m ∣ n ↔ m ∣ Int.natAbs n\n\nInt.natAbs_mul : ∀ (a b : ℤ),\n Int.natAbs (a * b) = Int.natAbs a * Int.natAbs b\n\nInt.natAbs_ofNat : ∀ (n : ℕ), Int.natAbs ↑n = n\n\nWith the help of these theorems, our extended version of Theorem_7_2_2 follows easily from the original version:\ntheorem Theorem_7_2_2_Int {a c : Nat} {b : Int}\n (h1 : ↑c ∣ ↑a * b) (h2 : rel_prime a c) : ↑c ∣ b := by\n rewrite [Int.natCast_dvd, Int.natAbs_mul,\n Int.natAbs_ofNat] at h1 --h1 : c ∣ a * Int.natAbs b\n rewrite [Int.natCast_dvd] --Goal : c ∣ Int.natAbs b\n show c ∣ Int.natAbs b from Theorem_7_2_2 h1 h2\n done\nWith that preparation, we can now prove Lemma_7_4_5.\nlemma Lemma_7_4_5 {m n : Nat} (a b : Int) (h1 : rel_prime m n) :\n a ≡ b (MOD m * n) ↔ a ≡ b (MOD m) ∧ a ≡ b (MOD n) := by\n apply Iff.intro\n · -- (→)\n assume h2 : a ≡ b (MOD m * n)\n obtain (j : Int) (h3 : a - b = (m * n) * j) from h2\n apply And.intro\n · -- Proof of a ≡ b (MOD m)\n apply Exists.intro (n * j)\n show a - b = m * (n * j) from\n calc a - b\n 
_ = m * n * j := h3\n _ = m * (n * j) := by ring\n done\n · -- Proof of a ≡ b (MOD n)\n apply Exists.intro (m * j)\n show a - b = n * (m * j) from\n calc a - b\n _ = m * n * j := h3\n _ = n * (m * j) := by ring\n done\n done\n · -- (←)\n assume h2 : a ≡ b (MOD m) ∧ a ≡ b (MOD n)\n obtain (j : Int) (h3 : a - b = m * j) from h2.left\n have h4 : (↑n : Int) ∣ a - b := h2.right\n rewrite [h3] at h4 --h4 : ↑n ∣ ↑m * j\n have h5 : ↑n ∣ j := Theorem_7_2_2_Int h4 h1\n obtain (k : Int) (h6 : j = n * k) from h5\n apply Exists.intro k --Goal : a - b = ↑(m * n) * k\n rewrite [Nat.cast_mul] --Goal : a - b = ↑m * ↑n * k\n show a - b = (m * n) * k from\n calc a - b\n _ = m * j := h3\n _ = m * (n * k) := by rw [h6]\n _ = (m * n) * k := by ring\n done\n done\nFinally, we will need an exercise from Section 7.2, and we will need to know NeZero p for prime numbers p:\ntheorem rel_prime_symm {a b : Nat} (h : rel_prime a b) :\n rel_prime b a := sorry\n\nlemma prime_NeZero {p : Nat} (h : prime p) : NeZero p := by\n rewrite [neZero_iff] --Goal : p ≠ 0\n define at h\n linarith\n done\nMuch of the reasoning about modular arithmetic that we need for the proof of Theorem_7_5_1 is contained in a technical lemma:\nlemma Lemma_7_5_1 {p e d m c s : Nat} {t : Int}\n (h1 : prime p) (h2 : e * d = (p - 1) * s + 1)\n (h3 : m ^ e - c = p * t) :\n c ^ d ≡ m (MOD p) := by\n have h4 : m ^ e ≡ c (MOD p) := Exists.intro t h3\n have h5 : [m ^ e]_p = [c]_p := (cc_eq_iff_congr _ _ _).rtl h4\n rewrite [←Exercise_7_4_5_Nat] at h5 --h5 : [m]_p ^ e = [c]_p\n by_cases h6 : p ∣ m\n · -- Case 1. 
h6 : p ∣ m\n have h7 : m ≡ 0 (MOD p) := by\n obtain (j : Nat) (h8 : m = p * j) from h6\n apply Exists.intro (↑j : Int) --Goal : ↑m - 0 = ↑p * ↑j\n rewrite [h8, Nat.cast_mul]\n ring\n done\n have h8 : [m]_p = [0]_p := (cc_eq_iff_congr _ _ _).rtl h7\n have h9 : e * d ≠ 0 := by\n rewrite [h2]\n show (p - 1) * s + 1 ≠ 0 from Nat.add_one_ne_zero _\n done\n have h10 : (0 : Int) ^ (e * d) = 0 := zero_pow h9\n have h11 : [c ^ d]_p = [m]_p :=\n calc [c ^ d]_p\n _ = [c]_p ^ d := by rw [Exercise_7_4_5_Nat]\n _ = ([m]_p ^ e) ^ d := by rw [h5]\n _ = [m]_p ^ (e * d) := by ring\n _ = [0]_p ^ (e * d) := by rw [h8]\n _ = [0 ^ (e * d)]_p := Exercise_7_4_5_Int _ _ _\n _ = [0]_p := by rw [h10]\n _ = [m]_p := by rw [h8]\n show c ^ d ≡ m (MOD p) from (cc_eq_iff_congr _ _ _).ltr h11\n done\n · -- Case 2. h6 : ¬p ∣ m\n have h7 : rel_prime m p := rel_prime_of_prime_not_dvd h1 h6\n have h8 : rel_prime p m := rel_prime_symm h7\n have h9 : NeZero p := prime_NeZero h1\n have h10 : (1 : Int) ^ s = 1 := by ring\n have h11 : [c ^ d]_p = [m]_p :=\n calc [c ^ d]_p\n _ = [c]_p ^ d := by rw [Exercise_7_4_5_Nat]\n _ = ([m]_p ^ e) ^ d := by rw [h5]\n _ = [m]_p ^ (e * d) := by ring\n _ = [m]_p ^ ((p - 1) * s + 1) := by rw [h2]\n _ = ([m]_p ^ (p - 1)) ^ s * [m]_p := by ring\n _ = ([m]_p ^ (phi p)) ^ s * [m]_p := by rw [phi_prime h1]\n _ = [1]_p ^ s * [m]_p := by rw [Theorem_7_4_2 h8]\n _ = [1 ^ s]_p * [m]_p := by rw [Exercise_7_4_5_Int]\n _ = [1]_p * [m]_p := by rw [h10]\n _ = [m]_p * [1]_p := by ring\n _ = [m]_p := Theorem_7_3_6_7 _\n show c ^ d ≡ m (MOD p) from (cc_eq_iff_congr _ _ _).ltr h11\n done\n done\nHere, finally, is the proof of Theorem_7_5_1:\ntheorem Theorem_7_5_1 (p q n e d k m c : Nat)\n (p_prime : prime p) (q_prime : prime q) (p_ne_q : p ≠ q)\n (n_pq : n = p * q) (ed_congr_1 : e * d = k * (p - 1) * (q - 1) + 1)\n (h1 : [m]_n ^ e = [c]_n) : [c]_n ^ d = [m]_n := by\n rewrite [Exercise_7_4_5_Nat, cc_eq_iff_congr] at h1\n --h1 : m ^ e ≡ c (MOD n)\n rewrite [Exercise_7_4_5_Nat, 
cc_eq_iff_congr]\n --Goal : c ^ d ≡ m (MOD n)\n obtain (j : Int) (h2 : m ^ e - c = n * j) from h1\n rewrite [n_pq, Nat.cast_mul] at h2\n --h2 : m ^ e - c = p * q * j\n have h3 : e * d = (p - 1) * (k * (q - 1)) + 1 := by\n rewrite [ed_congr_1]\n ring\n done\n have h4 : m ^ e - c = p * (q * j) := by\n rewrite [h2]\n ring\n done\n have congr_p : c ^ d ≡ m (MOD p) := Lemma_7_5_1 p_prime h3 h4\n have h5 : e * d = (q - 1) * (k * (p - 1)) + 1 := by\n rewrite [ed_congr_1]\n ring\n done\n have h6 : m ^ e - c = q * (p * j) := by\n rewrite [h2]\n ring\n done\n have congr_q : c ^ d ≡ m (MOD q) := Lemma_7_5_1 q_prime h5 h6\n have h7 : ¬q ∣ p := by\n by_contra h8\n have h9 : q = 1 ∨ q = p := dvd_prime p_prime h8\n disj_syll h9 (prime_not_one q_prime)\n show False from p_ne_q h9.symm\n done\n have h8 : rel_prime p q := rel_prime_of_prime_not_dvd q_prime h7\n rewrite [n_pq, Lemma_7_4_5 _ _ h8]\n show c ^ d ≡ m (MOD p) ∧ c ^ d ≡ m (MOD q) from\n And.intro congr_p congr_q\n done\n\nExercises\n\n--Hint: Use induction.\nlemma num_rp_prime {p : Nat} (h1 : prime p) :\n ∀ k < p, num_rp_below p (k + 1) = k := sorry\n\n\nlemma three_prime : prime 3 := sorry\n\n\n--Hint: Use the previous exercise, Exercise_7_2_7, and Theorem_7_4_2.\ntheorem Exercise_7_5_13a (a : Nat) (h1 : rel_prime 561 a) :\n a ^ 560 ≡ 1 (MOD 3) := sorry\n\n\n--Hint: Imitate the way Theorem_7_2_2_Int was proven from Theorem_7_2_2.\nlemma Theorem_7_2_3_Int {p : Nat} {a b : Int}\n (h1 : prime p) (h2 : ↑p ∣ a * b) : ↑p ∣ a ∨ ↑p ∣ b := sorry\n\n\n--Hint: Use the previous exercise.\ntheorem Exercise_7_5_14b (n : Nat) (b : Int)\n (h1 : prime n) (h2 : b ^ 2 ≡ 1 (MOD n)) :\n b ≡ 1 (MOD n) ∨ b ≡ -1 (MOD n) := sorry" }, { "objectID": "Chap8.html", "href": "Chap8.html", "title": "8  Infinite Sets", "section": "", - "text": "$$\n\\newcommand{\\setmin}{\\mathbin{\\backslash}}\n\\newcommand{\\symmdiff}{\\bigtriangleup}\n$$" + "text": "$$\n\\newcommand{\\setmin}{\\mathbin{\\backslash}}\n\\newcommand{\\symmdiff}{\\mathbin{∆}}\n$$" }, { 
"objectID": "Chap8.html#equinumerous-sets", "href": "Chap8.html#equinumerous-sets", "title": "8  Infinite Sets", "section": "8.1. Equinumerous Sets", - "text": "8.1. Equinumerous Sets\nChapter 8 of HTPI begins by defining a set \(A\) to be equinumerous with a set \(B\) if there is a function \(f : A \to B\) that is one-to-one and onto. As we will see, in Lean we will need to phrase this definition somewhat differently. However, we begin by considering some examples of functions that are one-to-one and onto.\nThe first example in HTPI is a one-to-one, onto function from \(\mathbb{Z}^+\) to \(\mathbb{Z}\). We will modify this example slightly to make it a function fnz from Nat to Int:\ndef fnz (n : Nat) : Int := if 2 ∣ n then ↑(n / 2) else -↑((n + 1) / 2)\nNote that, to get a result of type Int, coercion is necessary. We have specified that the coercion should be done after the computation of either n / 2 or (n + 1) / 2, with that computation being done using natural-number arithmetic. Checking a few values of this function suggests a simple pattern:\n#eval [fnz 0, fnz 1, fnz 2, fnz 3, fnz 4, fnz 5, fnz 6]\n --Answer: [0, -1, 1, -2, 2, -3, 3]\nPerhaps the easiest way to prove that fnz is one-to-one and onto is to define a function that turns out to be its inverse. 
This time, in order to get the right type for the value of the function, we use the function Int.toNat to convert a nonnegative integer to a natural number.\ndef fzn (a : Int) : Nat :=\n if a ≥ 0 then 2 * Int.toNat a else 2 * Int.toNat (-a) - 1\n\n#eval [fzn 0, fzn (-1), fzn 1, fzn (-2), fzn 2, fzn (-3), fzn 3]\n --Answer: [0, 1, 2, 3, 4, 5, 6]\nTo prove that fzn is the inverse of fnz, we begin by proving lemmas making it easier to compute the values of these functions\nlemma fnz_even (k : Nat) : fnz (2 * k) = ↑k := by\n have h1 : 2 ∣ 2 * k := by\n apply Exists.intro k\n rfl\n done\n have h2 : fnz (2 * k) = if 2 ∣ 2 * k then ↑(2 * k / 2)\n else -↑((2 * k + 1) / 2) := by rfl\n rewrite [if_pos h1] at h2 --h2 : fnz (2 * k) = ↑(2 * k / 2)\n have h3 : 0 < 2 := by linarith\n rewrite [Nat.mul_div_cancel_left k h3] at h2\n show fnz (2 * k) = ↑k from h2\n done\n\nlemma fnz_odd (k : Nat) : fnz (2 * k + 1) = -↑(k + 1) := sorry\n\nlemma fzn_nat (k : Nat) : fzn ↑k = 2 * k := by rfl\n\nlemma fzn_neg_succ_nat (k : Nat) : fzn (-↑(k + 1)) = 2 * k + 1 := by rfl\nUsing these lemmas and reasoning by cases, it is straightforward to prove lemmas confirming that the composition of these functions, in either order, yields the identity function. The cases for the first lemma are based on an exercise from Section 6.1.\nlemma fzn_fnz : fzn ∘ fnz = id := by\n apply funext --Goal : ∀ (x : Nat), (fzn ∘ fnz) x = id x\n fix n : Nat\n rewrite [comp_def] --Goal : fzn (fnz n) = id n\n have h1 : nat_even n ∨ nat_odd n := Exercise_6_1_16a1 n\n by_cases on h1\n · -- Case 1. h1 : nat_even n\n obtain (k : Nat) (h2 : n = 2 * k) from h1\n rewrite [h2, fnz_even, fzn_nat]\n rfl\n done\n · -- Case 2. 
h1 : nat_odd n\n obtain (k : Nat) (h2 : n = 2 * k + 1) from h1\n rewrite [h2, fnz_odd, fzn_neg_succ_nat]\n rfl\n done\n done\n\nlemma fnz_fzn : fnz ∘ fzn = id := sorry\nBy theorems from Chapter 5, it follows that both fnz and fzn are one-to-one and onto.\nlemma fzn_one_one : one_to_one fzn := Theorem_5_3_3_1 fzn fnz fnz_fzn\n\nlemma fzn_onto : onto fzn := Theorem_5_3_3_2 fzn fnz fzn_fnz\n\nlemma fnz_one_one : one_to_one fnz := Theorem_5_3_3_1 fnz fzn fzn_fnz\n\nlemma fnz_onto : onto fnz := Theorem_5_3_3_2 fnz fzn fnz_fzn\nWe’ll give one more example: a one-to-one, onto function fnnn from Nat × Nat to Nat, whose definition is modeled on a function from \\(\\mathbb{Z}^+ \\times \\mathbb{Z}^+\\) to \\(\\mathbb{Z}^+\\) in HTPI. The definition of fnnn will use numbers of the form k * (k + 1) / 2. These numbers are sometimes called triangular numbers, because they count the number of objects in a triangular grid with k rows.\ndef tri (k : Nat) : Nat := k * (k + 1) / 2\n\ndef fnnn (p : Nat × Nat) : Nat := tri (p.1 + p.2) + p.1\n\nlemma fnnn_def (a b : Nat) : fnnn (a, b) = tri (a + b) + a := by rfl\n\n#eval [fnnn (0, 0), fnnn (0, 1), fnnn (1, 0), fnnn (0, 2), fnnn (1, 1)]\n --Answer: [0, 1, 2, 3, 4]\nTwo simple lemmas about tri will help us prove the important properties of fnnn:\nlemma tri_step (k : Nat) : tri (k + 1) = tri k + k + 1 := sorry\n\nlemma tri_incr {j k : Nat} (h1 : j ≤ k) : tri j ≤ tri k := sorry\n\nlemma le_of_fnnn_eq {a1 b1 a2 b2 : Nat}\n (h1 : fnnn (a1, b1) = fnnn (a2, b2)) : a1 + b1 ≤ a2 + b2 := by\n by_contra h2\n have h3 : a2 + b2 + 1 ≤ a1 + b1 := by linarith\n have h4 : fnnn (a2, b2) < fnnn (a1, b1) :=\n calc fnnn (a2, b2)\n _ = tri (a2 + b2) + a2 := by rfl\n _ < tri (a2 + b2) + (a2 + b2) + 1 := by linarith\n _ = tri (a2 + b2 + 1) := (tri_step _).symm\n _ ≤ tri (a1 + b1) := tri_incr h3\n _ ≤ tri (a1 + b1) + a1 := by linarith\n _ = fnnn (a1, b1) := by rfl\n linarith\n done\n\nlemma fnnn_one_one : one_to_one fnnn := by\n fix (a1, b1) : Nat × Nat\n fix 
(a2, b2) : Nat × Nat\n assume h1 : fnnn (a1, b1) = fnnn (a2, b2) --Goal : (a1, b1) = (a2, b2)\n have h2 : a1 + b1 ≤ a2 + b2 := le_of_fnnn_eq h1\n have h3 : a2 + b2 ≤ a1 + b1 := le_of_fnnn_eq h1.symm\n have h4 : a1 + b1 = a2 + b2 := by linarith\n rewrite [fnnn_def, fnnn_def, h4] at h1\n --h1 : tri (a2 + b2) + a1 = tri (a2 + b2) + a2\n have h6 : a1 = a2 := Nat.add_left_cancel h1\n rewrite [h6] at h4 --h4 : a2 + b1 = a2 + b2\n have h7 : b1 = b2 := Nat.add_left_cancel h4\n rewrite [h6, h7]\n rfl\n done\n\nlemma fnnn_onto : onto fnnn := by\n define --Goal : ∀ (y : Nat), ∃ (x : Nat × Nat), fnnn x = y\n by_induc\n · -- Base Case\n apply Exists.intro (0, 0)\n rfl\n done\n · -- Induction Step\n fix n : Nat\n assume ih : ∃ (x : Nat × Nat), fnnn x = n\n obtain ((a, b) : Nat × Nat) (h1 : fnnn (a, b) = n) from ih\n by_cases h2 : b = 0\n · -- Case 1. h2 : b = 0\n apply Exists.intro (0, a + 1)\n show fnnn (0, a + 1) = n + 1 from\n calc fnnn (0, a + 1)\n _ = tri (0 + (a + 1)) + 0 := by rfl\n _ = tri (a + 1) := by ring\n _ = tri a + a + 1 := tri_step a\n _ = tri (a + 0) + a + 1 := by ring\n _ = fnnn (a, b) + 1 := by rw [h2, fnnn_def]\n _ = n + 1 := by rw [h1]\n done\n · -- Case 2. h2 : b ≠ 0\n obtain (k : Nat) (h3 : b = k + 1) from\n exists_eq_add_one_of_ne_zero h2\n apply Exists.intro (a + 1, k)\n show fnnn (a + 1, k) = n + 1 from\n calc fnnn (a + 1, k)\n _ = tri (a + 1 + k) + (a + 1) := by rfl\n _ = tri (a + (k + 1)) + a + 1 := by ring\n _ = tri (a + b) + a + 1 := by rw [h3]\n _ = fnnn (a, b) + 1 := by rfl\n _ = n + 1 := by rw [h1]\n done\n done\n done\nDespite these successes with one-to-one, onto functions, we will use a definition of “equinumerous” in Lean that is different from the definition in HTPI. There are two reasons for this change. First of all, the domain of a function in Lean must be a type, but we want to be able to talk about sets being equinumerous. 
Secondly, Lean expects functions to be computable; it regards the definition of a function as an algorithm for computing the value of the function on any input. This restriction would cause problems with some of our proofs. While there are ways to overcome these difficulties, they would introduce complications that we can avoid by using a different approach.\nSuppose U and V are types, and we have sets A : Set U and B : Set V. We will define A to be equinumerous with B if there is a relation R from U to V that defines a one-to-one correspondence between the elements of A and B. To formulate this precisely, suppose that R has type Rel U V. We will place three requirements on R. First, we require that the relation R should hold only between elements of A and B. We say in this case that R is a relation within A and B:\ndef rel_within {U V : Type} (R : Rel U V) (A : Set U) (B : Set V) : Prop :=\n ∀ ⦃x : U⦄ ⦃y : V⦄, R x y → x ∈ A ∧ y ∈ B\nNotice that in this definition, we have used the same double braces for the quantified variables x and y that were used in the definition of “subset.” This means that x and y are implicit arguments, and therefore if we have h1 : rel_within R A B and h2 : R a b, then h1 h2 is a proof of a ∈ A ∧ b ∈ B. There is no need to specify that a and b are the values to be assigned to x and y; Lean will figure that out for itself. (To type the double braces ⦃ and ⦄, type \\{{ and \\}}. There were cases in previous chapters where it would have been appropriate to use such implicit arguments, but we chose not to do so to avoid confusion. But by now you should be comfortable enough with Lean that you won’t be confused by this new complication.)\nNext, we require that every element of A is related by R to exactly one thing. We say in this case that R is functional on A:\ndef fcnl_on {U V : Type} (R : Rel U V) (A : Set U) : Prop :=\n ∀ ⦃x : U⦄, x ∈ A → ∃! 
(y : V), R x y\nFinally, we impose the same requirement in the other direction: for every element of B, exactly one thing should be related to it by R. We can express this by saying that the inverse of R is functional on B. In Chapter 4, we defined the inverse of a set of ordered pairs, but we can easily convert this to an operation on relations:\ndef invRel {U V : Type} (R : Rel U V) : Rel V U :=\n RelFromExt (inv (extension R))\n\nlemma invRel_def {U V : Type} (R : Rel U V) (u : U) (v : V) :\n invRel R v u ↔ R u v := by rfl\nWe will call R a matching from A to B if it meets the three requirements above:\ndef matching {U V : Type} (R : Rel U V) (A : Set U) (B : Set V) : Prop :=\n rel_within R A B ∧ fcnl_on R A ∧ fcnl_on (invRel R) B\nFinally, we say that A is equinumerous with B if there is a matching from A to B, and, as in HTPI, we introduce the notation A ∼ B to indicate that A is equinumerous with B (to enter the symbol ∼, type \sim or \~).\ndef equinum {U V : Type} (A : Set U) (B : Set V) : Prop :=\n ∃ (R : Rel U V), matching R A B\n\nnotation:50 A:50 \" ∼ \" B:50 => equinum A B\nCan the examples at the beginning of this section be used to establish that Int ∼ Nat and Nat × Nat ∼ Nat? Not quite, because Int, Nat, and Nat × Nat are types, not sets. We must talk about the sets of all objects of those types, not the types themselves, so we introduce another definition.\ndef Univ (U : Type) : Set U := {x : U | True}\n\nlemma elt_Univ {U : Type} (u : U) :\n u ∈ Univ U := by trivial\nFor any type U, Univ U is the set of all objects of type U; we might call it the universal set for the type U. Now we can use the functions defined earlier to prove that Univ Int ∼ Univ Nat and Univ (Nat × Nat) ∼ Univ Nat. To do this, we must convert the functions into relations and prove that those relations are matchings. 
The conversion can be done with the following function.\ndef RelWithinFromFunc {U V : Type} (f : U → V) (A : Set U)\n (x : U) (y : V) : Prop := x ∈ A ∧ f x = y\nThis definition says that if we have f : U → V and A : Set U, then RelWithinFromFunc f A is a relation from U to V that relates any x that is an element of A to f x.\nWe will say that a function is one-to-one on a set A if it satisfies the definition of one-to-one when applied to elements of A:\ndef one_one_on {U V : Type} (f : U → V) (A : Set U) : Prop :=\n ∀ ⦃x1 x2 : U⦄, x1 ∈ A → x2 ∈ A → f x1 = f x2 → x1 = x2\nWith all of this preparation, we can now prove that if f is one-to-one on A, then A is equinumerous with its image under f.\ntheorem equinum_image {U V : Type} {A : Set U} {B : Set V} {f : U → V}\n (h1 : one_one_on f A) (h2 : image f A = B) : A ∼ B := by\n rewrite [←h2]\n define --Goal : ∃ (R : Rel U V), matching R A (image f A)\n set R : Rel U V := RelWithinFromFunc f A\n apply Exists.intro R\n define --Goal : rel_within R A (image f A) ∧\n --fcnl_on R A ∧ fcnl_on (invRel R) (image f A)\n apply And.intro\n · -- Proof of rel_within\n define --Goal : ∀ ⦃x : U⦄ ⦃y : V⦄, R x y → x ∈ A ∧ y ∈ image f A\n fix x : U; fix y : V\n assume h3 : R x y --Goal : x ∈ A ∧ y ∈ image f A\n define at h3 --h3 : x ∈ A ∧ f x = y\n apply And.intro h3.left\n define\n show ∃ x ∈ A, f x = y from Exists.intro x h3\n done\n · -- Proofs of fcnl_ons\n apply And.intro\n · -- Proof of fcnl_on R A\n define --Goal : ∀ ⦃x : U⦄, x ∈ A → ∃! 
(y : V), R x y\n fix x : U\n assume h3 : x ∈ A\n exists_unique\n · -- Existence\n apply Exists.intro (f x)\n define --Goal : x ∈ A ∧ f x = f x\n apply And.intro h3\n rfl\n done\n · -- Uniqueness\n fix y1 : V; fix y2 : V\n assume h4 : R x y1\n assume h5 : R x y2 --Goal : y1 = y2\n define at h4; define at h5\n --h4 : x ∈ A ∧ f x = y1; h5 : x ∈ A ∧ f x = y2\n rewrite [h4.right] at h5\n show y1 = y2 from h5.right\n done\n done\n · -- Proof of fcnl_on (invRel R) (image f A)\n define --Goal : ∀ ⦃x : V⦄, x ∈ image f A → ∃! (y : U), invRel R x y\n fix y : V\n assume h3 : y ∈ image f A\n obtain (x : U) (h4 : x ∈ A ∧ f x = y) from h3\n exists_unique\n · -- Existence\n apply Exists.intro x\n define\n show x ∈ A ∧ f x = y from h4\n done\n · -- Uniqueness\n fix x1 : U; fix x2 : U\n assume h5 : invRel R y x1\n assume h6 : invRel R y x2\n define at h5; define at h6\n --h5 : x1 ∈ A ∧ f x1 = y; h6 : x2 ∈ A ∧ f x2 = y\n rewrite [←h6.right] at h5\n show x1 = x2 from h1 h5.left h6.left h5.right\n done\n done\n done\n done\nTo apply this result to the functions introduced at the beginning of this section, we will want to use Univ U for the set A in the theorem equinum_image:\nlemma one_one_on_of_one_one {U V : Type} {f : U → V}\n (h : one_to_one f) (A : Set U) : one_one_on f A := sorry\n\ntheorem equinum_Univ {U V : Type} {f : U → V}\n (h1 : one_to_one f) (h2 : onto f) : Univ U ∼ Univ V := by\n have h3 : image f (Univ U) = Univ V := by\n apply Set.ext\n fix v : V\n apply Iff.intro\n · -- (→)\n assume h3 : v ∈ image f (Univ U)\n show v ∈ Univ V from elt_Univ v\n done\n · -- (←)\n assume h3 : v ∈ Univ V\n obtain (u : U) (h4 : f u = v) from h2 v\n apply Exists.intro u\n apply And.intro _ h4\n show u ∈ Univ U from elt_Univ u\n done\n done\n show Univ U ∼ Univ V from\n equinum_image (one_one_on_of_one_one h1 (Univ U)) h3\n done\n\ntheorem Z_equinum_N : Univ Int ∼ Univ Nat :=\n equinum_Univ fzn_one_one fzn_onto\n\ntheorem NxN_equinum_N : Univ (Nat × Nat) ∼ Univ Nat :=\n equinum_Univ 
fnnn_one_one fnnn_onto\nTheorem 8.1.3 in HTPI shows that ∼ is reflexive, symmetric, and transitive. We’ll prove the three parts of this theorem separately. To prove that ∼ is reflexive, we use the identity function.\nlemma id_one_one_on {U : Type} (A : Set U) : one_one_on id A := sorry\n\nlemma image_id {U : Type} (A : Set U) : image id A = A := sorry\n\ntheorem Theorem_8_1_3_1 {U : Type} (A : Set U) : A ∼ A :=\n equinum_image (id_one_one_on A) (image_id A)\nFor symmetry, we show that the inverse of a matching is also a matching.\nlemma inv_inv {U V : Type} (R : Rel U V) : invRel (invRel R) = R := by rfl\n\nlemma inv_match {U V : Type} {R : Rel U V} {A : Set U} {B : Set V}\n (h : matching R A B) : matching (invRel R) B A := by\n define --Goal : rel_within (invRel R) B A ∧\n --fcnl_on (invRel R) B ∧ fcnl_on (invRel (invRel R)) A\n define at h --h : rel_within R A B ∧ fcnl_on R A ∧ fcnl_on (invRel R) B\n apply And.intro\n · -- Proof that rel_within (invRel R) B A\n define --Goal : ∀ ⦃x : V⦄ ⦃y : U⦄, invRel R x y → x ∈ B ∧ y ∈ A\n fix y : V; fix x : U\n assume h1 : invRel R y x\n define at h1 --h1 : R x y\n have h2 : x ∈ A ∧ y ∈ B := h.left h1\n show y ∈ B ∧ x ∈ A from And.intro h2.right h2.left\n done\n · -- proof that fcnl_on (inv R) B ∧ fcnl_on (inv (inv R)) A\n rewrite [inv_inv]\n show fcnl_on (invRel R) B ∧ fcnl_on R A from\n And.intro h.right.right h.right.left\n done\n done\n\ntheorem Theorem_8_1_3_2 {U V : Type} {A : Set U} {B : Set V}\n (h : A ∼ B) : B ∼ A := by\n obtain (R : Rel U V) (h1 : matching R A B) from h\n apply Exists.intro (invRel R)\n show matching (invRel R) B A from inv_match h1\n done\nThe proof of transitivity is a bit more involved. 
In this proof, as well as some later proofs, we find it useful to separate out the existence and uniqueness parts of the definition of fcnl_on:\nlemma fcnl_exists {U V : Type} {R : Rel U V} {A : Set U} {x : U}\n (h1 : fcnl_on R A) (h2 : x ∈ A) : ∃ (y : V), R x y := by\n define at h1\n obtain (y : V) (h3 : R x y)\n (h4 : ∀ (y_1 y_2 : V), R x y_1 → R x y_2 → y_1 = y_2) from h1 h2\n show ∃ (y : V), R x y from Exists.intro y h3\n done\n\nlemma fcnl_unique {U V : Type}\n {R : Rel U V} {A : Set U} {x : U} {y1 y2 : V} (h1 : fcnl_on R A)\n (h2 : x ∈ A) (h3 : R x y1) (h4 : R x y2) : y1 = y2 := by\n define at h1\n obtain (z : V) (h5 : R x z)\n (h6 : ∀ (y_1 y_2 : V), R x y_1 → R x y_2 → y_1 = y_2) from h1 h2\n show y1 = y2 from h6 y1 y2 h3 h4\n done\nTo prove transitivity, we will show that a composition of matchings is a matching. Once again we must convert our definition of composition of sets of ordered pairs into an operation on relations. A few preliminary lemmas help with the proof.\ndef compRel {U V W : Type} (S : Rel V W) (R : Rel U V) : Rel U W :=\n RelFromExt (comp (extension S) (extension R))\n\nlemma compRel_def {U V W : Type}\n (S : Rel V W) (R : Rel U V) (u : U) (w : W) :\n compRel S R u w ↔ ∃ (x : V), R u x ∧ S x w := by rfl\n\nlemma inv_comp {U V W : Type} (R : Rel U V) (S : Rel V W) :\n invRel (compRel S R) = compRel (invRel R) (invRel S) := \n calc invRel (compRel S R)\n _ = RelFromExt (inv (comp (extension S) (extension R))) := by rfl\n _ = RelFromExt (comp (inv (extension R)) (inv (extension S))) := by\n rw [Theorem_4_2_5_5]\n _ = compRel (invRel R) (invRel S) := by rfl\n\nlemma comp_fcnl {U V W : Type} {R : Rel U V} {S : Rel V W}\n {A : Set U} {B : Set V} {C : Set W} (h1 : matching R A B)\n (h2 : matching S B C) : fcnl_on (compRel S R) A := by\n define; define at h1; define at h2\n fix a : U\n assume h3 : a ∈ A\n obtain (b : V) (h4 : R a b) from fcnl_exists h1.right.left h3\n have h5 : a ∈ A ∧ b ∈ B := h1.left h4\n obtain (c : W) (h6 : S b c) from 
fcnl_exists h2.right.left h5.right\n exists_unique\n · -- Existence\n apply Exists.intro c\n rewrite [compRel_def]\n show ∃ (x : V), R a x ∧ S x c from Exists.intro b (And.intro h4 h6)\n done\n · -- Uniqueness\n fix c1 : W; fix c2 : W\n assume h7 : compRel S R a c1\n assume h8 : compRel S R a c2 --Goal : c1 = c2\n rewrite [compRel_def] at h7\n rewrite [compRel_def] at h8\n obtain (b1 : V) (h9 : R a b1 ∧ S b1 c1) from h7\n obtain (b2 : V) (h10 : R a b2 ∧ S b2 c2) from h8\n have h11 : b1 = b := fcnl_unique h1.right.left h3 h9.left h4\n have h12 : b2 = b := fcnl_unique h1.right.left h3 h10.left h4\n rewrite [h11] at h9\n rewrite [h12] at h10\n show c1 = c2 from\n fcnl_unique h2.right.left h5.right h9.right h10.right\n done\n done\n\nlemma comp_match {U V W : Type} {R : Rel U V} {S : Rel V W}\n {A : Set U} {B : Set V} {C : Set W} (h1 : matching R A B)\n (h2 : matching S B C) : matching (compRel S R) A C := by\n define\n apply And.intro\n · -- Proof of rel_within (compRel S R) A C\n define\n fix a : U; fix c : W\n assume h3 : compRel S R a c\n rewrite [compRel_def] at h3\n obtain (b : V) (h4 : R a b ∧ S b c) from h3\n have h5 : a ∈ A ∧ b ∈ B := h1.left h4.left\n have h6 : b ∈ B ∧ c ∈ C := h2.left h4.right\n show a ∈ A ∧ c ∈ C from And.intro h5.left h6.right\n done\n · -- Proof of fcnl_on statements\n apply And.intro\n · -- Proof of fcnl_on (compRel S R) A\n show fcnl_on (compRel S R) A from comp_fcnl h1 h2\n done\n · -- Proof of fcnl_on (invRel (compRel S R)) C\n rewrite [inv_comp]\n have h3 : matching (invRel R) B A := inv_match h1\n have h4 : matching (invRel S) C B := inv_match h2\n show fcnl_on (compRel (invRel R) (invRel S)) C from comp_fcnl h4 h3\n done\n done\n done\n\ntheorem Theorem_8_1_3_3 {U V W : Type} {A : Set U} {B : Set V} {C : Set W}\n (h1 : A ∼ B) (h2 : B ∼ C) : A ∼ C := by\n obtain (R : Rel U V) (h3 : matching R A B) from h1\n obtain (S : Rel V W) (h4 : matching S B C) from h2\n apply Exists.intro (compRel S R)\n show matching (compRel S R) A C from 
comp_match h3 h4\n done\nNow that we have a basic understanding of the concept of equinumerous sets, we can use this concept to make a number of definitions. For any natural number \(n\), HTPI defines \(I_n\) to be the set \(\{1, 2, \ldots, n\}\), and then it defines a set to be finite if it is equinumerous with \(I_n\), for some \(n\). In Lean, it is a bit more convenient to use sets of the form \(\{0, 1, \ldots, n - 1\}\). With that small change, we can repeat the definitions of finite, denumerable, and countable in HTPI.\ndef I (n : Nat) : Set Nat := {k : Nat | k < n}\n\nlemma I_def (k n : Nat) : k ∈ I n ↔ k < n := by rfl\n\ndef finite {U : Type} (A : Set U) : Prop :=\n ∃ (n : Nat), I n ∼ A\n\ndef denum {U : Type} (A : Set U) : Prop :=\n Univ Nat ∼ A\n\nlemma denum_def {U : Type} (A : Set U) : denum A ↔ Univ Nat ∼ A := by rfl\n\ndef ctble {U : Type} (A : Set U) : Prop :=\n finite A ∨ denum A\nTheorem 8.1.5 in HTPI gives two useful ways to characterize countable sets. The proof of the theorem in HTPI uses the fact that every set of natural numbers is countable. HTPI gives an intuitive explanation of why this is true, but of course in Lean an intuitive explanation won’t do. So before proving a version of Theorem 8.1.5, we sketch a proof that every set of natural numbers is countable.\nSuppose A has type Set Nat. To prove that A is countable, we will define a relation enum A that “enumerates” the elements of A by relating 0 to the smallest element of A, 1 to the next element of A, 2 to the next, and so on. How do we tell which natural number should be related to any element n of A? Notice that if n is the smallest element of A, then there are 0 elements of A that are smaller than n; if it is the second smallest element of A, then there is 1 element of A that is smaller than n; and so on. Thus, enum A should relate a natural number s to n if and only if the number of elements of A that are smaller than n is s. 
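For example, if A were the set of even natural numbers (an illustration only; this particular A plays no role in the formal development), the first few pairs related by enum A would be:\n-- enum A 0 0 (0 ∈ A, and no elements of A are smaller than 0)\n-- enum A 1 2 (2 ∈ A, and exactly one element of A, namely 0, is smaller than 2)\n-- enum A 2 4 (4 ∈ A, and exactly two elements of A, 0 and 2, are smaller than 4)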
This suggests a plan: First we define a proposition num_elts_below A n s saying that the number of elements of A that are smaller than n is s. Then we use this proposition to define the relation enum A, and finally we show that enum A is a matching that can be used to prove that A is countable.\nThe definition of num_elts_below is recursive. The recursive step relates the number of elements of A below n + 1 to the number of elements below n. There are two possibilities: either n ∈ A and the number of elements below n + 1 is one larger than the number below n, or n ∉ A and the two numbers are the same. (This may remind you of the recursion we used to define num_rp_below in Chapter 7.)\ndef num_elts_below (A : Set Nat) (m s : Nat) : Prop :=\n match m with\n | 0 => s = 0\n | n + 1 => (n ∈ A ∧ 1 ≤ s ∧ num_elts_below A n (s - 1)) ∨\n (n ∉ A ∧ num_elts_below A n s)\n\ndef enum (A : Set Nat) (s n : Nat) : Prop := n ∈ A ∧ num_elts_below A n s\nThe details of the proof that enum A is the required matching are long. We’ll skip them here, but you can find them in the HTPI Lean package.\nlemma neb_exists (A : Set Nat) :\n ∀ (n : Nat), ∃ (s : Nat), num_elts_below A n s := sorry\n\nlemma bdd_subset_nat_match {A : Set Nat} {m s : Nat}\n (h1 : ∀ n ∈ A, n < m) (h2 : num_elts_below A m s) :\n matching (enum A) (I s) A := sorry\n\nlemma bdd_subset_nat {A : Set Nat} {m s : Nat}\n (h1 : ∀ n ∈ A, n < m) (h2 : num_elts_below A m s) :\n I s ∼ A := Exists.intro (enum A) (bdd_subset_nat_match h1 h2)\n\nlemma unbdd_subset_nat_match {A : Set Nat}\n (h1 : ∀ (m : Nat), ∃ n ∈ A, n ≥ m) :\n matching (enum A) (Univ Nat) A := sorry\n\nlemma unbdd_subset_nat {A : Set Nat}\n (h1 : ∀ (m : Nat), ∃ n ∈ A, n ≥ m) :\n denum A := Exists.intro (enum A) (unbdd_subset_nat_match h1)\n\nlemma subset_nat_ctble (A : Set Nat) : ctble A := by\n define --Goal : finite A ∨ denum A\n by_cases h1 : ∃ (m : Nat), ∀ n ∈ A, n < m\n · -- Case 1. 
h1 : ∃ (m : Nat), ∀ n ∈ A, n < m\n apply Or.inl --Goal : finite A\n obtain (m : Nat) (h2 : ∀ n ∈ A, n < m) from h1\n obtain (s : Nat) (h3 : num_elts_below A m s) from neb_exists A m\n apply Exists.intro s\n show I s ∼ A from bdd_subset_nat h2 h3\n done\n · -- Case 2. h1 : ¬∃ (m : Nat), ∀ n ∈ A, n < m\n apply Or.inr --Goal : denum A\n push_neg at h1\n --This tactic converts h1 to ∀ (m : Nat), ∃ n ∈ A, m ≤ n\n show denum A from unbdd_subset_nat h1\n done\n done\nAs a consequence of our last lemma, we get another characterization of countable sets: a set is countable if and only if it is equinumerous with some subset of the natural numbers:\nlemma ctble_of_equinum_ctble {U V : Type} {A : Set U} {B : Set V}\n (h1 : A ∼ B) (h2 : ctble A) : ctble B := sorry\n\nlemma ctble_iff_equinum_set_nat {U : Type} (A : Set U) : \n ctble A ↔ ∃ (I : Set Nat), I ∼ A := by\n apply Iff.intro\n · -- (→)\n assume h1 : ctble A\n define at h1 --h1 : finite A ∨ denum A\n by_cases on h1\n · -- Case 1. h1 : finite A\n define at h1 --h1 : ∃ (n : Nat), I n ∼ A\n obtain (n : Nat) (h2 : I n ∼ A) from h1\n show ∃ (I : Set Nat), I ∼ A from Exists.intro (I n) h2\n done\n · -- Case 2. h1 : denum A\n rewrite [denum_def] at h1 --h1 : Univ Nat ∼ A\n show ∃ (I : Set Nat), I ∼ A from Exists.intro (Univ Nat) h1\n done\n done\n · -- (←)\n assume h1 : ∃ (I : Set Nat), I ∼ A\n obtain (I : Set Nat) (h2 : I ∼ A) from h1\n have h3 : ctble I := subset_nat_ctble I\n show ctble A from ctble_of_equinum_ctble h2 h3\n done\n done\nWe are now ready to turn to Theorem 8.1.5 in HTPI. The theorem gives two statements that are equivalent to the countability of a set \\(A\\). The first involves a function from the natural numbers to \\(A\\) that is onto. 
In keeping with our approach in this section, we state a similar characterization involving a relation rather than a function.\ndef unique_val_on_N {U : Type} (R : Rel Nat U) : Prop :=\n ∀ ⦃n : Nat⦄ ⦃x1 x2 : U⦄, R n x1 → R n x2 → x1 = x2\n\ndef nat_rel_onto {U : Type} (R : Rel Nat U) (A : Set U) : Prop :=\n ∀ ⦃x : U⦄, x ∈ A → ∃ (n : Nat), R n x\n\ndef fcnl_onto_from_nat {U : Type} (R : Rel Nat U) (A : Set U) : Prop :=\n unique_val_on_N R ∧ nat_rel_onto R A\nIntuitively, you might think of fcnl_onto_from_nat R A as meaning that the relation R defines a function whose domain is a subset of the natural numbers and whose range contains A.\nThe second characterization of the countability of \\(A\\) in Theorem 8.1.5 involves a function from \\(A\\) to the natural numbers that is one-to-one. Once again, we rephrase this in terms of relations. We define fcnl_one_one_to_nat R A to mean that R defines a function from A to the natural numbers that is one-to-one:\ndef fcnl_one_one_to_nat {U : Type} (R : Rel U Nat) (A : Set U) : Prop :=\n fcnl_on R A ∧ ∀ ⦃x1 x2 : U⦄ ⦃n : Nat⦄,\n (x1 ∈ A ∧ R x1 n) → (x2 ∈ A ∧ R x2 n) → x1 = x2\nOur plan is to prove that if A has type Set U then the following statements are equivalent:\n\nctble A\n∃ (R : Rel Nat U), fcnl_onto_from_nat R A\n∃ (R : Rel U Nat), fcnl_one_one_to_nat R A\n\nAs in HTPI, we will do this by proving 1 → 2 → 3 → 1. 
Here is the proof of 1 → 2.\ntheorem Theorem_8_1_5_1_to_2 {U : Type} {A : Set U} (h1 : ctble A) :\n ∃ (R : Rel Nat U), fcnl_onto_from_nat R A := by\n rewrite [ctble_iff_equinum_set_nat] at h1\n obtain (I : Set Nat) (h2 : I ∼ A) from h1\n obtain (R : Rel Nat U) (h3 : matching R I A) from h2\n define at h3\n --h3 : rel_within R I A ∧ fcnl_on R I ∧ fcnl_on (invRel R) A\n apply Exists.intro R\n define --Goal : unique_val_on_N R ∧ nat_rel_onto R A\n apply And.intro\n · -- Proof of unique_val_on_N R\n define\n fix n : Nat; fix x1 : U; fix x2 : U\n assume h4 : R n x1\n assume h5 : R n x2 --Goal : x1 = x2\n have h6 : n ∈ I ∧ x1 ∈ A := h3.left h4\n show x1 = x2 from fcnl_unique h3.right.left h6.left h4 h5\n done\n · -- Proof of nat_rel_onto R A\n define\n fix x : U\n assume h4 : x ∈ A --Goal : ∃ (n : Nat), R n x\n show ∃ (n : Nat), R n x from fcnl_exists h3.right.right h4\n done\n done\nFor the proof of 2 → 3, suppose we have A : Set U and S : Rel Nat U, and the statement fcnl_onto_from_nat S A is true. We need to come up with a relation R : Rel U Nat for which we can prove fcnl_one_one_to_nat R A. You might be tempted to try R = invRel S, but there is a problem with this choice: if x ∈ A, there might be multiple natural numbers n such that S n x holds, but we must make sure that there is only one n for which R x n holds. Our solution to this problem will be to define R x n to mean that n is the smallest natural number for which S n x holds. (The proof in HTPI uses a similar idea.) 
The well ordering principle guarantees that there always is such a smallest element.\ndef least_rel_to {U : Type} (S : Rel Nat U) (x : U) (n : Nat) : Prop :=\n S n x ∧ ∀ (m : Nat), S m x → n ≤ m\n\nlemma exists_least_rel_to {U : Type} {S : Rel Nat U} {x : U}\n (h1 : ∃ (n : Nat), S n x) : ∃ (n : Nat), least_rel_to S x n := by\n set W : Set Nat := {n : Nat | S n x}\n have h2 : ∃ (n : Nat), n ∈ W := h1\n show ∃ (n : Nat), least_rel_to S x n from well_ord_princ W h2\n done\n\ntheorem Theorem_8_1_5_2_to_3 {U : Type} {A : Set U}\n (h1 : ∃ (R : Rel Nat U), fcnl_onto_from_nat R A) :\n ∃ (R : Rel U Nat), fcnl_one_one_to_nat R A := by\n obtain (S : Rel Nat U) (h2 : fcnl_onto_from_nat S A) from h1\n define at h2 --h2 : unique_val_on_N S ∧ nat_rel_onto S A\n set R : Rel U Nat := least_rel_to S\n apply Exists.intro R\n define\n apply And.intro\n · -- Proof of fcnl_on R A\n define\n fix x : U\n assume h4 : x ∈ A --Goal : ∃! (y : Nat), R x y\n exists_unique\n · -- Existence\n have h5 : ∃ (n : Nat), S n x := h2.right h4\n show ∃ (n : Nat), R x n from exists_least_rel_to h5\n done\n · -- Uniqueness\n fix n1 : Nat; fix n2 : Nat\n assume h5 : R x n1\n assume h6 : R x n2 --Goal : n1 = n2\n define at h5 --h5 : S n1 x ∧ ∀ (m : Nat), S m x → n1 ≤ m\n define at h6 --h6 : S n2 x ∧ ∀ (m : Nat), S m x → n2 ≤ m\n have h7 : n1 ≤ n2 := h5.right n2 h6.left\n have h8 : n2 ≤ n1 := h6.right n1 h5.left\n linarith\n done\n done\n · -- Proof of one-to-one\n fix x1 : U; fix x2 : U; fix n : Nat\n assume h4 : x1 ∈ A ∧ R x1 n\n assume h5 : x2 ∈ A ∧ R x2 n\n have h6 : R x1 n := h4.right\n have h7 : R x2 n := h5.right\n define at h6 --h6 : S n x1 ∧ ∀ (m : Nat), S m x1 → n ≤ m\n define at h7 --h7 : S n x2 ∧ ∀ (m : Nat), S m x2 → n ≤ m\n show x1 = x2 from h2.left h6.left h7.left\n done\n done\nFinally, for the proof of 3 → 1, suppose we have A : Set U, S : Rel U Nat, and fcnl_one_one_to_nat S A holds. 
Our plan is to restrict S to elements of A and then show that the inverse of the resulting relation is a matching from some set of natural numbers to A. By ctble_iff_equinum_set_nat, this implies that A is countable.\ndef restrict_to {U V : Type} (S : Rel U V) (A : Set U)\n (x : U) (y : V) : Prop := x ∈ A ∧ S x y\n\ntheorem Theorem_8_1_5_3_to_1 {U : Type} {A : Set U}\n (h1 : ∃ (R : Rel U Nat), fcnl_one_one_to_nat R A) :\n ctble A := by\n obtain (S : Rel U Nat) (h2 : fcnl_one_one_to_nat S A) from h1\n define at h2 --h2 : fcnl_on S A ∧ ∀ ⦃x1 x2 : U⦄ ⦃n : Nat⦄,\n --x1 ∈ A ∧ S x1 n → x2 ∈ A ∧ S x2 n → x1 = x2\n rewrite [ctble_iff_equinum_set_nat] --Goal : ∃ (I : Set Nat), I ∼ A\n set R : Rel Nat U := invRel (restrict_to S A)\n set I : Set Nat := {n : Nat | ∃ (x : U), R n x}\n apply Exists.intro I\n define --Goal : ∃ (R : Rel Nat U), matching R I A\n apply Exists.intro R\n define\n apply And.intro\n · -- Proof of rel_within R I A\n define\n fix n : Nat; fix x : U\n assume h3 : R n x --Goal : n ∈ I ∧ x ∈ A\n apply And.intro\n · -- Proof that n ∈ I\n define --Goal : ∃ (x : U), R n x\n show ∃ (x : U), R n x from Exists.intro x h3\n done\n · -- Proof that x ∈ A\n define at h3 --h3 : x ∈ A ∧ S x n\n show x ∈ A from h3.left\n done\n done\n · -- Proofs of fcnl_ons\n apply And.intro\n · -- Proof of fcnl_on R I\n define\n fix n : Nat\n assume h3 : n ∈ I --Goal : ∃! (y : U), R n y\n exists_unique\n · -- Existence\n define at h3 --h3 : ∃ (x : U), R n x\n show ∃ (y : U), R n y from h3\n done\n · -- Uniqueness\n fix x1 : U; fix x2 : U\n assume h4 : R n x1\n assume h5 : R n x2\n define at h4 --h4 : x1 ∈ A ∧ S x1 n; \n define at h5 --h5 : x2 ∈ A ∧ S x2 n\n show x1 = x2 from h2.right h4 h5\n done\n done\n · -- Proof of fcnl_on (invRel R) A\n define\n fix x : U\n assume h3 : x ∈ A --Goal : ∃! 
(y : Nat), invRel R x y\n exists_unique\n · -- Existence\n obtain (y : Nat) (h4 : S x y) from fcnl_exists h2.left h3\n apply Exists.intro y\n define\n show x ∈ A ∧ S x y from And.intro h3 h4\n done\n · -- Uniqueness\n fix n1 : Nat; fix n2 : Nat\n assume h4 : invRel R x n1\n assume h5 : invRel R x n2 --Goal : n1 = n2\n define at h4 --h4 : x ∈ A ∧ S x n1\n define at h5 --h5 : x ∈ A ∧ S x n2\n show n1 = n2 from fcnl_unique h2.left h3 h4.right h5.right\n done\n done\n done\n done\nWe now know that statements 1–3 are equivalent, which means that statements 2 and 3 can be thought of as alternative ways to think about countability:\ntheorem Theorem_8_1_5_2 {U : Type} (A : Set U) :\n ctble A ↔ ∃ (R : Rel Nat U), fcnl_onto_from_nat R A := by\n apply Iff.intro\n · -- (→)\n assume h1 : ctble A\n show ∃ (R : Rel Nat U), fcnl_onto_from_nat R A from\n Theorem_8_1_5_1_to_2 h1\n done\n · -- (←)\n assume h1 : ∃ (R : Rel Nat U), fcnl_onto_from_nat R A\n have h2 : ∃ (R : Rel U Nat), fcnl_one_one_to_nat R A :=\n Theorem_8_1_5_2_to_3 h1\n show ctble A from Theorem_8_1_5_3_to_1 h2\n done\n done\n\ntheorem Theorem_8_1_5_3 {U : Type} (A : Set U) :\n ctble A ↔ ∃ (R : Rel U Nat), fcnl_one_one_to_nat R A := sorry\nIn the exercises, we ask you to show that countability of a set can be proven using functions of the kind considered in Theorem 8.1.5 of HTPI.\nWe end this section with a proof of Theorem 8.1.6 in HTPI, which says that the set of rational numbers is denumerable. Our strategy is to define a one-to-one function from Rat (the type of rational numbers) to Nat. We will need to know a little bit about the way rational numbers are represented in Lean. If q has type Rat, then q.num is the numerator of q, which is an integer, and q.den is the denominator, which is a nonzero natural number. The theorem Rat.ext says that if two rational numbers have the same numerator and denominator, then they are equal. 
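As a quick illustration (not part of the text's development; the displayed values assume that Lean stores rationals in lowest terms), we can inspect these fields directly:

```lean
-- 6/4 is stored in lowest terms as 3/2, so:
#eval (6 / 4 : Rat).num --Answer: 3
#eval (6 / 4 : Rat).den --Answer: 2
```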
And we will also use the theorem Prod.mk.inj, which says that if two ordered pairs are equal, then their first coordinates are equal, as are their second coordinates.\ndef fqn (q : Rat) : Nat := fnnn (fzn q.num, q.den)\n\nlemma fqn_def (q : Rat) : fqn q = fnnn (fzn q.num, q.den) := by rfl\n\nlemma fqn_one_one : one_to_one fqn := by\n define\n fix q1 : Rat; fix q2 : Rat\n assume h1 : fqn q1 = fqn q2\n rewrite [fqn_def, fqn_def] at h1\n --h1 : fnnn (fzn q1.num, q1.den) = fnnn (fzn q2.num, q2.den)\n have h2 : (fzn q1.num, q1.den) = (fzn q2.num, q2.den) :=\n fnnn_one_one _ _ h1\n have h3 : fzn q1.num = fzn q2.num ∧ q1.den = q2.den :=\n Prod.mk.inj h2\n have h4 : q1.num = q2.num := fzn_one_one _ _ h3.left\n show q1 = q2 from Rat.ext q1 q2 h4 h3.right\n done\n\nlemma image_fqn_unbdd :\n ∀ (m : Nat), ∃ n ∈ image fqn (Univ Rat), n ≥ m := by\n fix m : Nat\n set n : Nat := fqn ↑m\n apply Exists.intro n\n apply And.intro\n · -- Proof that n ∈ image fqn (Univ Rat)\n define\n apply Exists.intro ↑m\n apply And.intro (elt_Univ (↑m : Rat))\n rfl\n done\n · -- Proof that n ≥ m\n show n ≥ m from\n calc n\n _ = tri (2 * m + 1) + 2 * m := by rfl\n _ ≥ m := by linarith\n done\n done\n\ntheorem Theorem_8_1_6 : denum (Univ Rat) := by\n set I : Set Nat := image fqn (Univ Rat)\n have h1 : Univ Nat ∼ I := unbdd_subset_nat image_fqn_unbdd\n have h2 : image fqn (Univ Rat) = I := by rfl\n have h3 : Univ Rat ∼ I :=\n equinum_image (one_one_on_of_one_one fqn_one_one (Univ Rat)) h2\n have h4 : I ∼ Univ Rat := Theorem_8_1_3_2 h3\n show denum (Univ Rat) from Theorem_8_1_3_3 h1 h4\n done\n\nExercises\n\n--Hint: Use Exercise_6_1_16a2 from the exercises of Section 6.1\nlemma fnz_odd (k : Nat) : fnz (2 * k + 1) = -↑(k + 1) := sorry\n\n\nlemma fnz_fzn : fnz ∘ fzn = id := sorry\n\n\nlemma tri_step (k : Nat) : tri (k + 1) = tri k + k + 1 := sorry\n\n\nlemma tri_incr {j k : Nat} (h1 : j ≤ k) : tri j ≤ tri k := sorry\n\n\nlemma ctble_of_equinum_ctble {U V : Type} {A : Set U} {B : Set V}\n (h1 : A ∼ B) (h2 : 
ctble A) : ctble B := sorry\n\n\ntheorem Exercise_8_1_1_b : denum {n : Int | even n} := sorry\n\n\n\n\nThe next four exercises use the following definition:\ndef Rel_image {U V : Type} (R : Rel U V) (A : Set U) : Set V :=\n {y : V | ∃ x ∈ A, R x y}\nNote that if R has type Rel U V, then Rel_image R has type Set U → Set V.\n\nlemma Rel_image_on_power_set {U V : Type} {R : Rel U V}\n {A C : Set U} {B : Set V} (h1 : matching R A B) (h2 : C ∈ 𝒫 A) :\n Rel_image R C ∈ 𝒫 B := sorry\n\n\nlemma Rel_image_inv {U V : Type} {R : Rel U V}\n {A C : Set U} {B : Set V} (h1 : matching R A B) (h2 : C ∈ 𝒫 A) :\n Rel_image (invRel R) (Rel_image R C) = C := sorry\n\n\nlemma Rel_image_one_one_on {U V : Type} {R : Rel U V}\n {A : Set U} {B : Set V} (h1 : matching R A B) :\n one_one_on (Rel_image R) (𝒫 A) := sorry\n\n\nlemma Rel_image_image {U V : Type} {R : Rel U V}\n {A : Set U} {B : Set V} (h1 : matching R A B) :\n image (Rel_image R) (𝒫 A) = 𝒫 B := sorry\n\n\n--Hint: Use the previous two exercises.\ntheorem Exercise_8_1_5 {U V : Type} {A : Set U} {B : Set V}\n (h1 : A ∼ B) : 𝒫 A ∼ 𝒫 B := sorry\n\n\ntheorem Exercise_8_1_17 {U : Type} {A B : Set U}\n (h1 : B ⊆ A) (h2 : ctble A) : ctble B := sorry\n\n\ntheorem ctble_of_onto_func_from_N {U : Type} {A : Set U} {f : Nat → U}\n (h1 : ∀ x ∈ A, ∃ (n : Nat), f n = x) : ctble A := sorry\n\n\ntheorem ctble_of_one_one_func_to_N {U : Type} {A : Set U} {f : U → Nat}\n (h1 : one_one_on f A) : ctble A := sorry\n\n8.1. Equinumerous Sets\nChapter 8 of HTPI begins by defining a set \(A\) to be equinumerous with a set \(B\) if there is a function \(f : A \to B\) that is one-to-one and onto. As we will see, in Lean we will need to phrase this definition somewhat differently. However, we begin by considering some examples of functions that are one-to-one and onto.\nThe first example in HTPI is a one-to-one, onto function from \(\mathbb{Z}^+\) to \(\mathbb{Z}\).
We will modify this example slightly to make it a function fnz from Nat to Int:\ndef fnz (n : Nat) : Int := if 2 ∣ n then ↑(n / 2) else -↑((n + 1) / 2)\nNote that, to get a result of type Int, coercion is necessary. We have specified that the coercion should be done after the computation of either n / 2 or (n + 1) / 2, with that computation being done using natural-number arithmetic. Checking a few values of this function suggests a simple pattern:\n#eval [fnz 0, fnz 1, fnz 2, fnz 3, fnz 4, fnz 5, fnz 6]\n --Answer: [0, -1, 1, -2, 2, -3, 3]\nPerhaps the easiest way to prove that fnz is one-to-one and onto is to define a function that turns out to be its inverse. This time, in order to get the right type for the value of the function, we use the function Int.toNat to convert a nonnegative integer to a natural number.\ndef fzn (a : Int) : Nat :=\n if a ≥ 0 then 2 * Int.toNat a else 2 * Int.toNat (-a) - 1\n\n#eval [fzn 0, fzn (-1), fzn 1, fzn (-2), fzn 2, fzn (-3), fzn 3]\n --Answer: [0, 1, 2, 3, 4, 5, 6]\nTo prove that fzn is the inverse of fnz, we begin by proving lemmas that make it easier to compute the values of these functions:\nlemma fnz_even (k : Nat) : fnz (2 * k) = ↑k := by\n have h1 : 2 ∣ 2 * k := by\n apply Exists.intro k\n rfl\n done\n have h2 : fnz (2 * k) = if 2 ∣ 2 * k then ↑(2 * k / 2)\n else -↑((2 * k + 1) / 2) := by rfl\n rewrite [if_pos h1] at h2 --h2 : fnz (2 * k) = ↑(2 * k / 2)\n have h3 : 0 < 2 := by linarith\n rewrite [Nat.mul_div_cancel_left k h3] at h2\n show fnz (2 * k) = ↑k from h2\n done\n\nlemma fnz_odd (k : Nat) : fnz (2 * k + 1) = -↑(k + 1) := sorry\n\nlemma fzn_nat (k : Nat) : fzn ↑k = 2 * k := by rfl\n\nlemma fzn_neg_succ_nat (k : Nat) : fzn (-↑(k + 1)) = 2 * k + 1 := by rfl\nUsing these lemmas and reasoning by cases, it is straightforward to prove lemmas confirming that the composition of these functions, in either order, yields the identity function.
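Before proving this in general, we can spot-check the claim on a few values, in the style of the earlier `#eval` commands (an illustration only; each composition should return its input unchanged):

```lean
#eval [(fzn ∘ fnz) 0, (fzn ∘ fnz) 3, (fzn ∘ fnz) 6]  --Answer: [0, 3, 6]
#eval [(fnz ∘ fzn) (-2), (fnz ∘ fzn) 2]              --Answer: [-2, 2]
```

For example, fnz 3 = -2 and fzn (-2) = 2 * 2 - 1 = 3, as the first list confirms.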
The cases for the first lemma are based on an exercise from Section 6.1.\nlemma fzn_fnz : fzn ∘ fnz = id := by\n apply funext --Goal : ∀ (x : Nat), (fzn ∘ fnz) x = id x\n fix n : Nat\n rewrite [comp_def] --Goal : fzn (fnz n) = id n\n have h1 : nat_even n ∨ nat_odd n := Exercise_6_1_16a1 n\n by_cases on h1\n · -- Case 1. h1 : nat_even n\n obtain (k : Nat) (h2 : n = 2 * k) from h1\n rewrite [h2, fnz_even, fzn_nat]\n rfl\n done\n · -- Case 2. h1 : nat_odd n\n obtain (k : Nat) (h2 : n = 2 * k + 1) from h1\n rewrite [h2, fnz_odd, fzn_neg_succ_nat]\n rfl\n done\n done\n\nlemma fnz_fzn : fnz ∘ fzn = id := sorry\nBy theorems from Chapter 5, it follows that both fnz and fzn are one-to-one and onto.\nlemma fzn_one_one : one_to_one fzn := Theorem_5_3_3_1 fzn fnz fnz_fzn\n\nlemma fzn_onto : onto fzn := Theorem_5_3_3_2 fzn fnz fzn_fnz\n\nlemma fnz_one_one : one_to_one fnz := Theorem_5_3_3_1 fnz fzn fzn_fnz\n\nlemma fnz_onto : onto fnz := Theorem_5_3_3_2 fnz fzn fnz_fzn\nWe’ll give one more example: a one-to-one, onto function fnnn from Nat × Nat to Nat, whose definition is modeled on a function from \\(\\mathbb{Z}^+ \\times \\mathbb{Z}^+\\) to \\(\\mathbb{Z}^+\\) in HTPI. The definition of fnnn will use numbers of the form k * (k + 1) / 2. 
These numbers are sometimes called triangular numbers, because they count the number of objects in a triangular grid with k rows.\ndef tri (k : Nat) : Nat := k * (k + 1) / 2\n\ndef fnnn (p : Nat × Nat) : Nat := tri (p.1 + p.2) + p.1\n\nlemma fnnn_def (a b : Nat) : fnnn (a, b) = tri (a + b) + a := by rfl\n\n#eval [fnnn (0, 0), fnnn (0, 1), fnnn (1, 0), fnnn (0, 2), fnnn (1, 1)]\n --Answer: [0, 1, 2, 3, 4]\nTwo simple lemmas about tri will help us prove the important properties of fnnn:\nlemma tri_step (k : Nat) : tri (k + 1) = tri k + k + 1 := sorry\n\nlemma tri_incr {j k : Nat} (h1 : j ≤ k) : tri j ≤ tri k := sorry\n\nlemma le_of_fnnn_eq {a1 b1 a2 b2 : Nat}\n (h1 : fnnn (a1, b1) = fnnn (a2, b2)) : a1 + b1 ≤ a2 + b2 := by\n by_contra h2\n have h3 : a2 + b2 + 1 ≤ a1 + b1 := by linarith\n have h4 : fnnn (a2, b2) < fnnn (a1, b1) :=\n calc fnnn (a2, b2)\n _ = tri (a2 + b2) + a2 := by rfl\n _ < tri (a2 + b2) + (a2 + b2) + 1 := by linarith\n _ = tri (a2 + b2 + 1) := (tri_step _).symm\n _ ≤ tri (a1 + b1) := tri_incr h3\n _ ≤ tri (a1 + b1) + a1 := by linarith\n _ = fnnn (a1, b1) := by rfl\n linarith\n done\n\nlemma fnnn_one_one : one_to_one fnnn := by\n fix (a1, b1) : Nat × Nat\n fix (a2, b2) : Nat × Nat\n assume h1 : fnnn (a1, b1) = fnnn (a2, b2) --Goal : (a1, b1) = (a2, b2)\n have h2 : a1 + b1 ≤ a2 + b2 := le_of_fnnn_eq h1\n have h3 : a2 + b2 ≤ a1 + b1 := le_of_fnnn_eq h1.symm\n have h4 : a1 + b1 = a2 + b2 := by linarith\n rewrite [fnnn_def, fnnn_def, h4] at h1\n --h1 : tri (a2 + b2) + a1 = tri (a2 + b2) + a2\n have h6 : a1 = a2 := Nat.add_left_cancel h1\n rewrite [h6] at h4 --h4 : a2 + b1 = a2 + b2\n have h7 : b1 = b2 := Nat.add_left_cancel h4\n rewrite [h6, h7]\n rfl\n done\n\nlemma fnnn_onto : onto fnnn := by\n define --Goal : ∀ (y : Nat), ∃ (x : Nat × Nat), fnnn x = y\n by_induc\n · -- Base Case\n apply Exists.intro (0, 0)\n rfl\n done\n · -- Induction Step\n fix n : Nat\n assume ih : ∃ (x : Nat × Nat), fnnn x = n\n obtain ((a, b) : Nat × Nat) (h1 : fnnn (a, b) = n) 
from ih\n by_cases h2 : b = 0\n · -- Case 1. h2 : b = 0\n apply Exists.intro (0, a + 1)\n show fnnn (0, a + 1) = n + 1 from\n calc fnnn (0, a + 1)\n _ = tri (0 + (a + 1)) + 0 := by rfl\n _ = tri (a + 1) := by ring\n _ = tri a + a + 1 := tri_step a\n _ = tri (a + 0) + a + 1 := by ring\n _ = fnnn (a, b) + 1 := by rw [h2, fnnn_def]\n _ = n + 1 := by rw [h1]\n done\n · -- Case 2. h2 : b ≠ 0\n obtain (k : Nat) (h3 : b = k + 1) from\n exists_eq_add_one_of_ne_zero h2\n apply Exists.intro (a + 1, k)\n show fnnn (a + 1, k) = n + 1 from\n calc fnnn (a + 1, k)\n _ = tri (a + 1 + k) + (a + 1) := by rfl\n _ = tri (a + (k + 1)) + a + 1 := by ring\n _ = tri (a + b) + a + 1 := by rw [h3]\n _ = fnnn (a, b) + 1 := by rfl\n _ = n + 1 := by rw [h1]\n done\n done\n done\nDespite these successes with one-to-one, onto functions, we will use a definition of “equinumerous” in Lean that is different from the definition in HTPI. There are two reasons for this change. First of all, the domain of a function in Lean must be a type, but we want to be able to talk about sets being equinumerous. Secondly, Lean expects functions to be computable; it regards the definition of a function as an algorithm for computing the value of the function on any input. This restriction would cause problems with some of our proofs. While there are ways to overcome these difficulties, they would introduce complications that we can avoid by using a different approach.\nSuppose U and V are types, and we have sets A : Set U and B : Set V. We will define A to be equinumerous with B if there is a relation R from U to V that defines a one-to-one correspondence between the elements of A and B. To formulate this precisely, suppose that R has type Rel U V. We will place three requirements on R. First, we require that the relation R should hold only between elements of A and B. 
We say in this case that R is a relation within A and B:\ndef rel_within {U V : Type} (R : Rel U V) (A : Set U) (B : Set V) : Prop :=\n ∀ ⦃x : U⦄ ⦃y : V⦄, R x y → x ∈ A ∧ y ∈ B\nNotice that in this definition, we have used the same double braces for the quantified variables x and y that were used in the definition of “subset.” This means that x and y are implicit arguments, and therefore if we have h1 : rel_within R A B and h2 : R a b, then h1 h2 is a proof of a ∈ A ∧ b ∈ B. There is no need to specify that a and b are the values to be assigned to x and y; Lean will figure that out for itself. (To type the double braces ⦃ and ⦄, type \\{{ and \\}}. There were cases in previous chapters where it would have been appropriate to use such implicit arguments, but we chose not to do so to avoid confusion. But by now you should be comfortable enough with Lean that you won’t be confused by this new complication.)\nNext, we require that every element of A is related by R to exactly one thing. We say in this case that R is functional on A:\ndef fcnl_on {U V : Type} (R : Rel U V) (A : Set U) : Prop :=\n ∀ ⦃x : U⦄, x ∈ A → ∃! (y : V), R x y\nFinally, we impose the same requirement in the other direction: for every element of B, exactly one thing should be related to it by R. We can express this by saying that the inverse of R is functional on B. 
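To see the implicit arguments in action, here is a small check (a sketch assuming only the definitions above); as described earlier, Lean infers the values of x and y from the proof of R a b:

```lean
-- Lean unfolds rel_within and instantiates the implicit x and y as a and b.
example {U V : Type} {R : Rel U V} {A : Set U} {B : Set V} {a : U} {b : V}
    (h1 : rel_within R A B) (h2 : R a b) : a ∈ A ∧ b ∈ B := h1 h2
```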
In Chapter 4, we defined the inverse of a set of ordered pairs, but we can easily convert this to an operation on relations:\ndef invRel {U V : Type} (R : Rel U V) : Rel V U :=\n RelFromExt (inv (extension R))\n\nlemma invRel_def {U V : Type} (R : Rel U V) (u : U) (v : V) :\n invRel R v u ↔ R u v := by rfl\nWe will call R a matching from A to B if it meets the three requirements above:\ndef matching {U V : Type} (R : Rel U V) (A : Set U) (B : Set V) : Prop :=\n rel_within R A B ∧ fcnl_on R A ∧ fcnl_on (invRel R) B\nFinally, we say that A is equinumerous with B if there is a matching from A to B, and, as in HTPI, we introduce the notation A ∼ B to indicate that A is equinumerous with B (to enter the symbol ∼, type \sim or \~).\ndef equinum {U V : Type} (A : Set U) (B : Set V) : Prop :=\n ∃ (R : Rel U V), matching R A B\n\nnotation:50 A:50 \" ∼ \" B:50 => equinum A B\nCan the examples at the beginning of this section be used to establish that Int ∼ Nat and Nat × Nat ∼ Nat? Not quite, because Int, Nat, and Nat × Nat are types, not sets. We must talk about the sets of all objects of those types, not the types themselves, so we introduce another definition.\ndef Univ (U : Type) : Set U := {x : U | True}\n\nlemma elt_Univ {U : Type} (u : U) :\n u ∈ Univ U := by trivial\nFor any type U, Univ U is the set of all objects of type U; we might call it the universal set for the type U. Now we can use the functions defined earlier to prove that Univ Int ∼ Univ Nat and Univ (Nat × Nat) ∼ Univ Nat. To do this, we must convert the functions into relations and prove that those relations are matchings.
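As a quick sanity check on the definition of Univ (an illustration, not part of the development), any particular value can be shown to belong to the universal set for its type by means of the lemma elt_Univ:

```lean
example : (5 : Int) ∈ Univ Int := elt_Univ (5 : Int)
```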
The conversion can be done with the following function.\ndef RelWithinFromFunc {U V : Type} (f : U → V) (A : Set U)\n (x : U) (y : V) : Prop := x ∈ A ∧ f x = y\nThis definition says that if we have f : U → V and A : Set U, then RelWithinFromFunc f A is a relation from U to V that relates any x that is an element of A to f x.\nWe will say that a function is one-to-one on a set A if it satisfies the definition of one-to-one when applied to elements of A:\ndef one_one_on {U V : Type} (f : U → V) (A : Set U) : Prop :=\n ∀ ⦃x1 x2 : U⦄, x1 ∈ A → x2 ∈ A → f x1 = f x2 → x1 = x2\nWith all of this preparation, we can now prove that if f is one-to-one on A, then A is equinumerous with its image under f.\ntheorem equinum_image {U V : Type} {A : Set U} {B : Set V} {f : U → V}\n (h1 : one_one_on f A) (h2 : image f A = B) : A ∼ B := by\n rewrite [←h2]\n define --Goal : ∃ (R : Rel U V), matching R A (image f A)\n set R : Rel U V := RelWithinFromFunc f A\n apply Exists.intro R\n define --Goal : rel_within R A (image f A) ∧\n --fcnl_on R A ∧ fcnl_on (invRel R) (image f A)\n apply And.intro\n · -- Proof of rel_within\n define --Goal : ∀ ⦃x : U⦄ ⦃y : V⦄, R x y → x ∈ A ∧ y ∈ image f A\n fix x : U; fix y : V\n assume h3 : R x y --Goal : x ∈ A ∧ y ∈ image f A\n define at h3 --h3 : x ∈ A ∧ f x = y\n apply And.intro h3.left\n define\n show ∃ x ∈ A, f x = y from Exists.intro x h3\n done\n · -- Proofs of fcnl_ons\n apply And.intro\n · -- Proof of fcnl_on R A\n define --Goal : ∀ ⦃x : U⦄, x ∈ A → ∃! 
(y : V), R x y\n fix x : U\n assume h3 : x ∈ A\n exists_unique\n · -- Existence\n apply Exists.intro (f x)\n define --Goal : x ∈ A ∧ f x = f x\n apply And.intro h3\n rfl\n done\n · -- Uniqueness\n fix y1 : V; fix y2 : V\n assume h4 : R x y1\n assume h5 : R x y2 --Goal : y1 = y2\n define at h4; define at h5\n --h4 : x ∈ A ∧ f x = y1; h5 : x ∈ A ∧ f x = y2\n rewrite [h4.right] at h5\n show y1 = y2 from h5.right\n done\n done\n · -- Proof of fcnl_on (invRel R) (image f A)\n define --Goal : ∀ ⦃x : V⦄, x ∈ image f A → ∃! (y : U), invRel R x y\n fix y : V\n assume h3 : y ∈ image f A\n obtain (x : U) (h4 : x ∈ A ∧ f x = y) from h3\n exists_unique\n · -- Existence\n apply Exists.intro x\n define\n show x ∈ A ∧ f x = y from h4\n done\n · -- Uniqueness\n fix x1 : U; fix x2 : U\n assume h5 : invRel R y x1\n assume h6 : invRel R y x2\n define at h5; define at h6\n --h5 : x1 ∈ A ∧ f x1 = y; h6 : x2 ∈ A ∧ f x2 = y\n rewrite [←h6.right] at h5\n show x1 = x2 from h1 h5.left h6.left h5.right\n done\n done\n done\n done\nTo apply this result to the functions introduced at the beginning of this section, we will want to use Univ U for the set A in the theorem equinum_image:\nlemma one_one_on_of_one_one {U V : Type} {f : U → V}\n (h : one_to_one f) (A : Set U) : one_one_on f A := sorry\n\ntheorem equinum_Univ {U V : Type} {f : U → V}\n (h1 : one_to_one f) (h2 : onto f) : Univ U ∼ Univ V := by\n have h3 : image f (Univ U) = Univ V := by\n apply Set.ext\n fix v : V\n apply Iff.intro\n · -- (→)\n assume h3 : v ∈ image f (Univ U)\n show v ∈ Univ V from elt_Univ v\n done\n · -- (←)\n assume h3 : v ∈ Univ V\n obtain (u : U) (h4 : f u = v) from h2 v\n apply Exists.intro u\n apply And.intro _ h4\n show u ∈ Univ U from elt_Univ u\n done\n done\n show Univ U ∼ Univ V from\n equinum_image (one_one_on_of_one_one h1 (Univ U)) h3\n done\n\ntheorem Z_equinum_N : Univ Int ∼ Univ Nat :=\n equinum_Univ fzn_one_one fzn_onto\n\ntheorem NxN_equinum_N : Univ (Nat × Nat) ∼ Univ Nat :=\n equinum_Univ 
fnnn_one_one fnnn_onto\nTheorem 8.1.3 in HTPI shows that ∼ is reflexive, symmetric, and transitive. We’ll prove the three parts of this theorem separately. To prove that ∼ is reflexive, we use the identity function.\nlemma id_one_one_on {U : Type} (A : Set U) : one_one_on id A := sorry\n\nlemma image_id {U : Type} (A : Set U) : image id A = A := sorry\n\ntheorem Theorem_8_1_3_1 {U : Type} (A : Set U) : A ∼ A :=\n equinum_image (id_one_one_on A) (image_id A)\nFor symmetry, we show that the inverse of a matching is also a matching.\nlemma inv_inv {U V : Type} (R : Rel U V) : invRel (invRel R) = R := by rfl\n\nlemma inv_match {U V : Type} {R : Rel U V} {A : Set U} {B : Set V}\n (h : matching R A B) : matching (invRel R) B A := by\n define --Goal : rel_within (invRel R) B A ∧\n --fcnl_on (invRel R) B ∧ fcnl_on (invRel (invRel R)) A\n define at h --h : rel_within R A B ∧ fcnl_on R A ∧ fcnl_on (invRel R) B\n apply And.intro\n · -- Proof that rel_within (invRel R) B A\n define --Goal : ∀ ⦃x : V⦄ ⦃y : U⦄, invRel R x y → x ∈ B ∧ y ∈ A\n fix y : V; fix x : U\n assume h1 : invRel R y x\n define at h1 --h1 : R x y\n have h2 : x ∈ A ∧ y ∈ B := h.left h1\n show y ∈ B ∧ x ∈ A from And.intro h2.right h2.left\n done\n · -- proof that fcnl_on (inv R) B ∧ fcnl_on (inv (inv R)) A\n rewrite [inv_inv]\n show fcnl_on (invRel R) B ∧ fcnl_on R A from\n And.intro h.right.right h.right.left\n done\n done\n\ntheorem Theorem_8_1_3_2 {U V : Type} {A : Set U} {B : Set V}\n (h : A ∼ B) : B ∼ A := by\n obtain (R : Rel U V) (h1 : matching R A B) from h\n apply Exists.intro (invRel R)\n show matching (invRel R) B A from inv_match h1\n done\nThe proof of transitivity is a bit more involved. 
In this proof, as well as some later proofs, we find it useful to separate out the existence and uniqueness parts of the definition of fcnl_on:\nlemma fcnl_exists {U V : Type} {R : Rel U V} {A : Set U} {x : U}\n (h1 : fcnl_on R A) (h2 : x ∈ A) : ∃ (y : V), R x y := by\n define at h1\n obtain (y : V) (h3 : R x y)\n (h4 : ∀ (y_1 y_2 : V), R x y_1 → R x y_2 → y_1 = y_2) from h1 h2\n show ∃ (y : V), R x y from Exists.intro y h3\n done\n\nlemma fcnl_unique {U V : Type}\n {R : Rel U V} {A : Set U} {x : U} {y1 y2 : V} (h1 : fcnl_on R A)\n (h2 : x ∈ A) (h3 : R x y1) (h4 : R x y2) : y1 = y2 := by\n define at h1\n obtain (z : V) (h5 : R x z)\n (h6 : ∀ (y_1 y_2 : V), R x y_1 → R x y_2 → y_1 = y_2) from h1 h2\n show y1 = y2 from h6 y1 y2 h3 h4\n done\nTo prove transitivity, we will show that a composition of matchings is a matching. Once again we must convert our definition of composition of sets of ordered pairs into an operation on relations. A few preliminary lemmas help with the proof.\ndef compRel {U V W : Type} (S : Rel V W) (R : Rel U V) : Rel U W :=\n RelFromExt (comp (extension S) (extension R))\n\nlemma compRel_def {U V W : Type}\n (S : Rel V W) (R : Rel U V) (u : U) (w : W) :\n compRel S R u w ↔ ∃ (x : V), R u x ∧ S x w := by rfl\n\nlemma inv_comp {U V W : Type} (R : Rel U V) (S : Rel V W) :\n invRel (compRel S R) = compRel (invRel R) (invRel S) := \n calc invRel (compRel S R)\n _ = RelFromExt (inv (comp (extension S) (extension R))) := by rfl\n _ = RelFromExt (comp (inv (extension R)) (inv (extension S))) := by\n rw [Theorem_4_2_5_5]\n _ = compRel (invRel R) (invRel S) := by rfl\n\nlemma comp_fcnl {U V W : Type} {R : Rel U V} {S : Rel V W}\n {A : Set U} {B : Set V} {C : Set W} (h1 : matching R A B)\n (h2 : matching S B C) : fcnl_on (compRel S R) A := by\n define; define at h1; define at h2\n fix a : U\n assume h3 : a ∈ A\n obtain (b : V) (h4 : R a b) from fcnl_exists h1.right.left h3\n have h5 : a ∈ A ∧ b ∈ B := h1.left h4\n obtain (c : W) (h6 : S b c) from 
fcnl_exists h2.right.left h5.right\n exists_unique\n · -- Existence\n apply Exists.intro c\n rewrite [compRel_def]\n show ∃ (x : V), R a x ∧ S x c from Exists.intro b (And.intro h4 h6)\n done\n · -- Uniqueness\n fix c1 : W; fix c2 : W\n assume h7 : compRel S R a c1\n assume h8 : compRel S R a c2 --Goal : c1 = c2\n rewrite [compRel_def] at h7\n rewrite [compRel_def] at h8\n obtain (b1 : V) (h9 : R a b1 ∧ S b1 c1) from h7\n obtain (b2 : V) (h10 : R a b2 ∧ S b2 c2) from h8\n have h11 : b1 = b := fcnl_unique h1.right.left h3 h9.left h4\n have h12 : b2 = b := fcnl_unique h1.right.left h3 h10.left h4\n rewrite [h11] at h9\n rewrite [h12] at h10\n show c1 = c2 from\n fcnl_unique h2.right.left h5.right h9.right h10.right\n done\n done\n\nlemma comp_match {U V W : Type} {R : Rel U V} {S : Rel V W}\n {A : Set U} {B : Set V} {C : Set W} (h1 : matching R A B)\n (h2 : matching S B C) : matching (compRel S R) A C := by\n define\n apply And.intro\n · -- Proof of rel_within (compRel S R) A C\n define\n fix a : U; fix c : W\n assume h3 : compRel S R a c\n rewrite [compRel_def] at h3\n obtain (b : V) (h4 : R a b ∧ S b c) from h3\n have h5 : a ∈ A ∧ b ∈ B := h1.left h4.left\n have h6 : b ∈ B ∧ c ∈ C := h2.left h4.right\n show a ∈ A ∧ c ∈ C from And.intro h5.left h6.right\n done\n · -- Proof of fcnl_on statements\n apply And.intro\n · -- Proof of fcnl_on (compRel S R) A\n show fcnl_on (compRel S R) A from comp_fcnl h1 h2\n done\n · -- Proof of fcnl_on (invRel (compRel S R)) C\n rewrite [inv_comp]\n have h3 : matching (invRel R) B A := inv_match h1\n have h4 : matching (invRel S) C B := inv_match h2\n show fcnl_on (compRel (invRel R) (invRel S)) C from comp_fcnl h4 h3\n done\n done\n done\n\ntheorem Theorem_8_1_3_3 {U V W : Type} {A : Set U} {B : Set V} {C : Set W}\n (h1 : A ∼ B) (h2 : B ∼ C) : A ∼ C := by\n obtain (R : Rel U V) (h3 : matching R A B) from h1\n obtain (S : Rel V W) (h4 : matching S B C) from h2\n apply Exists.intro (compRel S R)\n show matching (compRel S R) A C from
comp_match h3 h4\n done\nNow that we have a basic understanding of the concept of equinumerous sets, we can use this concept to make a number of definitions. For any natural number \(n\), HTPI defines \(I_n\) to be the set \(\{1, 2, \ldots, n\}\), and then it defines a set to be finite if it is equinumerous with \(I_n\), for some \(n\). In Lean, it is a bit more convenient to use sets of the form \(\{0, 1, \ldots, n - 1\}\). With that small change, we can repeat the definitions of finite, denumerable, and countable in HTPI.\ndef I (n : Nat) : Set Nat := {k : Nat | k < n}\n\nlemma I_def (k n : Nat) : k ∈ I n ↔ k < n := by rfl\n\ndef finite {U : Type} (A : Set U) : Prop :=\n ∃ (n : Nat), I n ∼ A\n\ndef denum {U : Type} (A : Set U) : Prop :=\n Univ Nat ∼ A\n\nlemma denum_def {U : Type} (A : Set U) : denum A ↔ Univ Nat ∼ A := by rfl\n\ndef ctble {U : Type} (A : Set U) : Prop :=\n finite A ∨ denum A\nTheorem 8.1.5 in HTPI gives two useful ways to characterize countable sets. The proof of the theorem in HTPI uses the fact that every set of natural numbers is countable. HTPI gives an intuitive explanation of why this is true, but of course in Lean an intuitive explanation won’t do. So before proving a version of Theorem 8.1.5, we sketch a proof that every set of natural numbers is countable.\nSuppose A has type Set Nat. To prove that A is countable, we will define a relation enum A that “enumerates” the elements of A by relating 0 to the smallest element of A, 1 to the next element of A, 2 to the next, and so on. How do we tell which natural number should be related to any element n of A? Notice that if n is the smallest element of A, then there are 0 elements of A that are smaller than n; if it is the second smallest element of A, then there is 1 element of A that is smaller than n; and so on. Thus, enum A should relate a natural number s to n if and only if the number of elements of A that are smaller than n is s.
This suggests a plan: First we define a proposition num_elts_below A n s saying that the number of elements of A that are smaller than n is s. Then we use this proposition to define the relation enum A, and finally we show that enum A is a matching that can be used to prove that A is countable.\nThe definition of num_elts_below is recursive. The recursive step relates the number of elements of A below n + 1 to the number of elements below n. There are two possibilities: either n ∈ A and the number of elements below n + 1 is one larger than the number below n, or n ∉ A and the two numbers are the same. (This may remind you of the recursion we used to define num_rp_below in Chapter 7.)\ndef num_elts_below (A : Set Nat) (m s : Nat) : Prop :=\n match m with\n | 0 => s = 0\n | n + 1 => (n ∈ A ∧ 1 ≤ s ∧ num_elts_below A n (s - 1)) ∨\n (n ∉ A ∧ num_elts_below A n s)\n\ndef enum (A : Set Nat) (s n : Nat) : Prop := n ∈ A ∧ num_elts_below A n s\nThe details of the proof that enum A is the required matching are long. We’ll skip them here, but you can find them in the HTPI Lean package.\nlemma neb_exists (A : Set Nat) :\n ∀ (n : Nat), ∃ (s : Nat), num_elts_below A n s := sorry\n\nlemma bdd_subset_nat_match {A : Set Nat} {m s : Nat}\n (h1 : ∀ n ∈ A, n < m) (h2 : num_elts_below A m s) :\n matching (enum A) (I s) A := sorry\n\nlemma bdd_subset_nat {A : Set Nat} {m s : Nat}\n (h1 : ∀ n ∈ A, n < m) (h2 : num_elts_below A m s) :\n I s ∼ A := Exists.intro (enum A) (bdd_subset_nat_match h1 h2)\n\nlemma unbdd_subset_nat_match {A : Set Nat}\n (h1 : ∀ (m : Nat), ∃ n ∈ A, n ≥ m) :\n matching (enum A) (Univ Nat) A := sorry\n\nlemma unbdd_subset_nat {A : Set Nat}\n (h1 : ∀ (m : Nat), ∃ n ∈ A, n ≥ m) :\n denum A := Exists.intro (enum A) (unbdd_subset_nat_match h1)\n\nlemma subset_nat_ctble (A : Set Nat) : ctble A := by\n define --Goal : finite A ∨ denum A\n by_cases h1 : ∃ (m : Nat), ∀ n ∈ A, n < m\n · -- Case 1. 
h1 : ∃ (m : Nat), ∀ n ∈ A, n < m\n apply Or.inl --Goal : finite A\n obtain (m : Nat) (h2 : ∀ n ∈ A, n < m) from h1\n obtain (s : Nat) (h3 : num_elts_below A m s) from neb_exists A m\n apply Exists.intro s\n show I s ∼ A from bdd_subset_nat h2 h3\n done\n · -- Case 2. h1 : ¬∃ (m : Nat), ∀ n ∈ A, n < m\n apply Or.inr --Goal : denum A\n push_neg at h1\n --This tactic converts h1 to ∀ (m : Nat), ∃ n ∈ A, m ≤ n\n show denum A from unbdd_subset_nat h1\n done\n done\nAs a consequence of our last lemma, we get another characterization of countable sets: a set is countable if and only if it is equinumerous with some subset of the natural numbers:\nlemma ctble_of_equinum_ctble {U V : Type} {A : Set U} {B : Set V}\n (h1 : A ∼ B) (h2 : ctble A) : ctble B := sorry\n\nlemma ctble_iff_equinum_set_nat {U : Type} (A : Set U) : \n ctble A ↔ ∃ (I : Set Nat), I ∼ A := by\n apply Iff.intro\n · -- (→)\n assume h1 : ctble A\n define at h1 --h1 : finite A ∨ denum A\n by_cases on h1\n · -- Case 1. h1 : finite A\n define at h1 --h1 : ∃ (n : Nat), I n ∼ A\n obtain (n : Nat) (h2 : I n ∼ A) from h1\n show ∃ (I : Set Nat), I ∼ A from Exists.intro (I n) h2\n done\n · -- Case 2. h1 : denum A\n rewrite [denum_def] at h1 --h1 : Univ Nat ∼ A\n show ∃ (I : Set Nat), I ∼ A from Exists.intro (Univ Nat) h1\n done\n done\n · -- (←)\n assume h1 : ∃ (I : Set Nat), I ∼ A\n obtain (I : Set Nat) (h2 : I ∼ A) from h1\n have h3 : ctble I := subset_nat_ctble I\n show ctble A from ctble_of_equinum_ctble h2 h3\n done\n done\nWe are now ready to turn to Theorem 8.1.5 in HTPI. The theorem gives two statements that are equivalent to the countability of a set \\(A\\). The first involves a function from the natural numbers to \\(A\\) that is onto. 
In keeping with our approach in this section, we state a similar characterization involving a relation rather than a function.\ndef unique_val_on_N {U : Type} (R : Rel Nat U) : Prop :=\n ∀ ⦃n : Nat⦄ ⦃x1 x2 : U⦄, R n x1 → R n x2 → x1 = x2\n\ndef nat_rel_onto {U : Type} (R : Rel Nat U) (A : Set U) : Prop :=\n ∀ ⦃x : U⦄, x ∈ A → ∃ (n : Nat), R n x\n\ndef fcnl_onto_from_nat {U : Type} (R : Rel Nat U) (A : Set U) : Prop :=\n unique_val_on_N R ∧ nat_rel_onto R A\nIntuitively, you might think of fcnl_onto_from_nat R A as meaning that the relation R defines a function whose domain is a subset of the natural numbers and whose range contains A.\nThe second characterization of the countability of \\(A\\) in Theorem 8.1.5 involves a function from \\(A\\) to the natural numbers that is one-to-one. Once again, we rephrase this in terms of relations. We define fcnl_one_one_to_nat R A to mean that R defines a function from A to the natural numbers that is one-to-one:\ndef fcnl_one_one_to_nat {U : Type} (R : Rel U Nat) (A : Set U) : Prop :=\n fcnl_on R A ∧ ∀ ⦃x1 x2 : U⦄ ⦃n : Nat⦄,\n (x1 ∈ A ∧ R x1 n) → (x2 ∈ A ∧ R x2 n) → x1 = x2\nOur plan is to prove that if A has type Set U then the following statements are equivalent:\n\nctble A\n∃ (R : Rel Nat U), fcnl_onto_from_nat R A\n∃ (R : Rel U Nat), fcnl_one_one_to_nat R A\n\nAs in HTPI, we will do this by proving 1 → 2 → 3 → 1. 
Here is the proof of 1 → 2.\ntheorem Theorem_8_1_5_1_to_2 {U : Type} {A : Set U} (h1 : ctble A) :\n ∃ (R : Rel Nat U), fcnl_onto_from_nat R A := by\n rewrite [ctble_iff_equinum_set_nat] at h1\n obtain (I : Set Nat) (h2 : I ∼ A) from h1\n obtain (R : Rel Nat U) (h3 : matching R I A) from h2\n define at h3\n --h3 : rel_within R I A ∧ fcnl_on R I ∧ fcnl_on (invRel R) A\n apply Exists.intro R\n define --Goal : unique_val_on_N R ∧ nat_rel_onto R A\n apply And.intro\n · -- Proof of unique_val_on_N R\n define\n fix n : Nat; fix x1 : U; fix x2 : U\n assume h4 : R n x1\n assume h5 : R n x2 --Goal : x1 = x2\n have h6 : n ∈ I ∧ x1 ∈ A := h3.left h4\n show x1 = x2 from fcnl_unique h3.right.left h6.left h4 h5\n done\n · -- Proof of nat_rel_onto R A\n define\n fix x : U\n assume h4 : x ∈ A --Goal : ∃ (n : Nat), R n x\n show ∃ (n : Nat), R n x from fcnl_exists h3.right.right h4\n done\n done\nFor the proof of 2 → 3, suppose we have A : Set U and S : Rel Nat U, and the statement fcnl_onto_from_nat S A is true. We need to come up with a relation R : Rel U Nat for which we can prove fcnl_one_one_to_nat R A. You might be tempted to try R = invRel S, but there is a problem with this choice: if x ∈ A, there might be multiple natural numbers n such that S n x holds, but we must make sure that there is only one n for which R x n holds. Our solution to this problem will be to define R x n to mean that n is the smallest natural number for which S n x holds. (The proof in HTPI uses a similar idea.) 
The well-ordering principle guarantees that there always is such a smallest element.\ndef least_rel_to {U : Type} (S : Rel Nat U) (x : U) (n : Nat) : Prop :=\n S n x ∧ ∀ (m : Nat), S m x → n ≤ m\n\nlemma exists_least_rel_to {U : Type} {S : Rel Nat U} {x : U}\n (h1 : ∃ (n : Nat), S n x) : ∃ (n : Nat), least_rel_to S x n := by\n set W : Set Nat := {n : Nat | S n x}\n have h2 : ∃ (n : Nat), n ∈ W := h1\n show ∃ (n : Nat), least_rel_to S x n from well_ord_princ W h2\n done\n\ntheorem Theorem_8_1_5_2_to_3 {U : Type} {A : Set U}\n (h1 : ∃ (R : Rel Nat U), fcnl_onto_from_nat R A) :\n ∃ (R : Rel U Nat), fcnl_one_one_to_nat R A := by\n obtain (S : Rel Nat U) (h2 : fcnl_onto_from_nat S A) from h1\n define at h2 --h2 : unique_val_on_N S ∧ nat_rel_onto S A\n set R : Rel U Nat := least_rel_to S\n apply Exists.intro R\n define\n apply And.intro\n · -- Proof of fcnl_on R A\n define\n fix x : U\n assume h4 : x ∈ A --Goal : ∃! (y : Nat), R x y\n exists_unique\n · -- Existence\n have h5 : ∃ (n : Nat), S n x := h2.right h4\n show ∃ (n : Nat), R x n from exists_least_rel_to h5\n done\n · -- Uniqueness\n fix n1 : Nat; fix n2 : Nat\n assume h5 : R x n1\n assume h6 : R x n2 --Goal : n1 = n2\n define at h5 --h5 : S n1 x ∧ ∀ (m : Nat), S m x → n1 ≤ m\n define at h6 --h6 : S n2 x ∧ ∀ (m : Nat), S m x → n2 ≤ m\n have h7 : n1 ≤ n2 := h5.right n2 h6.left\n have h8 : n2 ≤ n1 := h6.right n1 h5.left\n linarith\n done\n done\n · -- Proof of one-to-one\n fix x1 : U; fix x2 : U; fix n : Nat\n assume h4 : x1 ∈ A ∧ R x1 n\n assume h5 : x2 ∈ A ∧ R x2 n\n have h6 : R x1 n := h4.right\n have h7 : R x2 n := h5.right\n define at h6 --h6 : S n x1 ∧ ∀ (m : Nat), S m x1 → n ≤ m\n define at h7 --h7 : S n x2 ∧ ∀ (m : Nat), S m x2 → n ≤ m\n show x1 = x2 from h2.left h6.left h7.left\n done\n done\nFinally, for the proof of 3 → 1, suppose we have A : Set U, S : Rel U Nat, and fcnl_one_one_to_nat S A holds. 
Our plan is to restrict S to elements of A and then show that the inverse of the resulting relation is a matching from some set of natural numbers to A. By ctble_iff_equinum_set_nat, this implies that A is countable.\ndef restrict_to {U V : Type} (S : Rel U V) (A : Set U)\n (x : U) (y : V) : Prop := x ∈ A ∧ S x y\n\ntheorem Theorem_8_1_5_3_to_1 {U : Type} {A : Set U}\n (h1 : ∃ (R : Rel U Nat), fcnl_one_one_to_nat R A) :\n ctble A := by\n obtain (S : Rel U Nat) (h2 : fcnl_one_one_to_nat S A) from h1\n define at h2 --h2 : fcnl_on S A ∧ ∀ ⦃x1 x2 : U⦄ ⦃n : Nat⦄,\n --x1 ∈ A ∧ S x1 n → x2 ∈ A ∧ S x2 n → x1 = x2\n rewrite [ctble_iff_equinum_set_nat] --Goal : ∃ (I : Set Nat), I ∼ A\n set R : Rel Nat U := invRel (restrict_to S A)\n set I : Set Nat := {n : Nat | ∃ (x : U), R n x}\n apply Exists.intro I\n define --Goal : ∃ (R : Rel Nat U), matching R I A\n apply Exists.intro R\n define\n apply And.intro\n · -- Proof of rel_within R I A\n define\n fix n : Nat; fix x : U\n assume h3 : R n x --Goal : n ∈ I ∧ x ∈ A\n apply And.intro\n · -- Proof that n ∈ I\n define --Goal : ∃ (x : U), R n x\n show ∃ (x : U), R n x from Exists.intro x h3\n done\n · -- Proof that x ∈ A\n define at h3 --h3 : x ∈ A ∧ S x n\n show x ∈ A from h3.left\n done\n done\n · -- Proofs of fcnl_ons\n apply And.intro\n · -- Proof of fcnl_on R I\n define\n fix n : Nat\n assume h3 : n ∈ I --Goal : ∃! (y : U), R n y\n exists_unique\n · -- Existence\n define at h3 --h3 : ∃ (x : U), R n x\n show ∃ (y : U), R n y from h3\n done\n · -- Uniqueness\n fix x1 : U; fix x2 : U\n assume h4 : R n x1\n assume h5 : R n x2\n define at h4 --h4 : x1 ∈ A ∧ S x1 n; \n define at h5 --h5 : x2 ∈ A ∧ S x2 n\n show x1 = x2 from h2.right h4 h5\n done\n done\n · -- Proof of fcnl_on (invRel R) A\n define\n fix x : U\n assume h3 : x ∈ A --Goal : ∃! 
(y : Nat), invRel R x y\n exists_unique\n · -- Existence\n obtain (y : Nat) (h4 : S x y) from fcnl_exists h2.left h3\n apply Exists.intro y\n define\n show x ∈ A ∧ S x y from And.intro h3 h4\n done\n · -- Uniqueness\n fix n1 : Nat; fix n2 : Nat\n assume h4 : invRel R x n1\n assume h5 : invRel R x n2 --Goal : n1 = n2\n define at h4 --h4 : x ∈ A ∧ S x n1\n define at h5 --h5 : x ∈ A ∧ S x n2\n show n1 = n2 from fcnl_unique h2.left h3 h4.right h5.right\n done\n done\n done\n done\nWe now know that statements 1–3 are equivalent, which means that statements 2 and 3 can be thought of as alternative ways to think about countability:\ntheorem Theorem_8_1_5_2 {U : Type} (A : Set U) :\n ctble A ↔ ∃ (R : Rel Nat U), fcnl_onto_from_nat R A := by\n apply Iff.intro\n · -- (→)\n assume h1 : ctble A\n show ∃ (R : Rel Nat U), fcnl_onto_from_nat R A from\n Theorem_8_1_5_1_to_2 h1\n done\n · -- (←)\n assume h1 : ∃ (R : Rel Nat U), fcnl_onto_from_nat R A\n have h2 : ∃ (R : Rel U Nat), fcnl_one_one_to_nat R A :=\n Theorem_8_1_5_2_to_3 h1\n show ctble A from Theorem_8_1_5_3_to_1 h2\n done\n done\n\ntheorem Theorem_8_1_5_3 {U : Type} (A : Set U) :\n ctble A ↔ ∃ (R : Rel U Nat), fcnl_one_one_to_nat R A := sorry\nIn the exercises, we ask you to show that countability of a set can be proven using functions of the kind considered in Theorem 8.1.5 of HTPI.\nWe end this section with a proof of Theorem 8.1.6 in HTPI, which says that the set of rational numbers is denumerable. Our strategy is to define a one-to-one function from Rat (the type of rational numbers) to Nat. We will need to know a little bit about the way rational numbers are represented in Lean. If q has type Rat, then q.num is the numerator of q, which is an integer, and q.den is the denominator, which is a nonzero natural number. The theorem Rat.ext says that if two rational numbers have the same numerator and denominator, then they are equal. 
And we will also use the theorem Prod.mk.inj, which says that if two ordered pairs are equal, then their first coordinates are equal, as are their second coordinates.\ndef fqn (q : Rat) : Nat := fnnn (fzn q.num, q.den)\n\nlemma fqn_def (q : Rat) : fqn q = fnnn (fzn q.num, q.den) := by rfl\n\nlemma fqn_one_one : one_to_one fqn := by\n define\n fix q1 : Rat; fix q2 : Rat\n assume h1 : fqn q1 = fqn q2\n rewrite [fqn_def, fqn_def] at h1\n --h1 : fnnn (fzn q1.num, q1.den) = fnnn (fzn q2.num, q2.den)\n have h2 : (fzn q1.num, q1.den) = (fzn q2.num, q2.den) :=\n fnnn_one_one _ _ h1\n have h3 : fzn q1.num = fzn q2.num ∧ q1.den = q2.den :=\n Prod.mk.inj h2\n have h4 : q1.num = q2.num := fzn_one_one _ _ h3.left\n show q1 = q2 from Rat.ext h4 h3.right\n done\n\nlemma image_fqn_unbdd :\n ∀ (m : Nat), ∃ n ∈ image fqn (Univ Rat), n ≥ m := by\n fix m : Nat\n set n : Nat := fqn ↑m\n apply Exists.intro n\n apply And.intro\n · -- Proof that n ∈ image fqn (Univ Rat)\n define\n apply Exists.intro ↑m\n apply And.intro (elt_Univ (↑m : Rat))\n rfl\n done\n · -- Proof that n ≥ m\n show n ≥ m from\n calc n\n _ = tri (2 * m + 1) + 2 * m := by rfl\n _ ≥ m := by linarith\n done\n done\n\ntheorem Theorem_8_1_6 : denum (Univ Rat) := by\n set I : Set Nat := image fqn (Univ Rat)\n have h1 : Univ Nat ∼ I := unbdd_subset_nat image_fqn_unbdd\n have h2 : image fqn (Univ Rat) = I := by rfl\n have h3 : Univ Rat ∼ I :=\n equinum_image (one_one_on_of_one_one fqn_one_one (Univ Rat)) h2\n have h4 : I ∼ Univ Rat := Theorem_8_1_3_2 h3\n show denum (Univ Rat) from Theorem_8_1_3_3 h1 h4\n done\n\nExercises\n\n--Hint: Use Exercise_6_1_16a2 from the exercises of Section 6.1\nlemma fnz_odd (k : Nat) : fnz (2 * k + 1) = -↑(k + 1) := sorry\n\n\nlemma fnz_fzn : fnz ∘ fzn = id := sorry\n\n\nlemma tri_step (k : Nat) : tri (k + 1) = tri k + k + 1 := sorry\n\n\nlemma tri_incr {j k : Nat} (h1 : j ≤ k) : tri j ≤ tri k := sorry\n\n\nlemma ctble_of_equinum_ctble {U V : Type} {A : Set U} {B : Set V}\n (h1 : A ∼ B) (h2 : ctble 
A) : ctble B := sorry\n\n\ntheorem Exercise_8_1_1_b : denum {n : Int | even n} := sorry\n\n\n\n\nThe next four exercises use the following definition:\ndef Rel_image {U V : Type} (R : Rel U V) (A : Set U) : Set V :=\n {y : V | ∃ x ∈ A, R x y}\nNote that if R has type Rel U V, then Rel_image R has type Set U → Set V.\n\nlemma Rel_image_on_power_set {U V : Type} {R : Rel U V}\n {A C : Set U} {B : Set V} (h1 : matching R A B) (h2 : C ∈ 𝒫 A) :\n Rel_image R C ∈ 𝒫 B := sorry\n\n\nlemma Rel_image_inv {U V : Type} {R : Rel U V}\n {A C : Set U} {B : Set V} (h1 : matching R A B) (h2 : C ∈ 𝒫 A) :\n Rel_image (invRel R) (Rel_image R C) = C := sorry\n\n\nlemma Rel_image_one_one_on {U V : Type} {R : Rel U V}\n {A : Set U} {B : Set V} (h1 : matching R A B) :\n one_one_on (Rel_image R) (𝒫 A) := sorry\n\n\nlemma Rel_image_image {U V : Type} {R : Rel U V}\n {A : Set U} {B : Set V} (h1 : matching R A B) :\n image (Rel_image R) (𝒫 A) = 𝒫 B := sorry\n\n\n--Hint: Use the previous two exercises.\ntheorem Exercise_8_1_5 {U V : Type} {A : Set U} {B : Set V}\n (h1 : A ∼ B) : 𝒫 A ∼ 𝒫 B := sorry\n\n\ntheorem Exercise_8_1_17 {U : Type} {A B : Set U}\n (h1 : B ⊆ A) (h2 : ctble A) : ctble B := sorry\n\n\ntheorem ctble_of_onto_func_from_N {U : Type} {A : Set U} {f : Nat → U}\n (h1 : ∀ x ∈ A, ∃ (n : Nat), f n = x) : ctble A := sorry\n\n\ntheorem ctble_of_one_one_func_to_N {U : Type} {A : Set U} {f : U → Nat}\n (h1 : one_one_on f A) : ctble A := sorry" }, { "objectID": "Chap8.html#debts-paid", @@ -361,7 +361,7 @@ "href": "Appendix.html", "title": "Appendix", "section": "", - "text": "$$\n\\newcommand{\\setmin}{\\mathbin{\\backslash}}\n\\newcommand{\\symmdiff}{\\bigtriangleup}\n$$" + "text": "$$\n\\newcommand{\\setmin}{\\mathbin{\\backslash}}\n\\newcommand{\\symmdiff}{\\mathbin{∆}}\n$$" }, { "objectID": "Appendix.html#tactics-used", @@ -382,6 +382,6 @@ "href": "Appendix.html#typing-symbols", "title": "Appendix", "section": "Typing Symbols", - "text": "Typing Symbols\n\n\n\n\nSymbol\nHow To Type 
It\n\n\n\n\n¬\n\\not or \\n\n\n\n∧\n\\and\n\n\n∨\n\\or or \\v\n\n\n→\n\\to or \\r or \\imp\n\n\n↔︎\n\\iff or \\lr\n\n\n∀\n\\forall or \\all\n\n\n∃\n\\exists or \\ex\n\n\n⦃\n\\{{\n\n\n⦄\n\\}}\n\n\n=\n=\n\n\n≠\n\\ne\n\n\n∈\n\\in\n\n\n∉\n\\notin or \\inn\n\n\n⊆\n\\sub\n\n\n⊈\n\\subn\n\n\n∪\n\\union or \\cup\n\n\n∩\n\\inter or \\cap\n\n\n⋃₀\n\\U0\n\n\n⋂₀\n\\I0\n\n\n\\\n\\\\\n\n\n△\n\\bigtriangleup\n\n\n∅\n\\emptyset\n\n\n𝒫\n\\powerset\n\n\n·\n\\.\n\n\n←\n\\leftarrow or \\l\n\n\n↑\n\\uparrow or \\u\n\n\nℕ\n\\N\n\n\nℤ\n\\Z\n\n\nℚ\n\\Q\n\n\nℝ\n\\R\n\n\nℂ\n\\C\n\n\n≤\n\\le\n\n\n≥\n\\ge\n\n\n∣\n\\|\n\n\n×\n\\times or \\x\n\n\n∘\n\\comp or \\circ\n\n\n≡\n\\==\n\n\n∼\n\\sim or \\~\n\n\nₛ\n\\_s\n\n\nᵣ\n\\_r\n\n\n⟨\n\\<\n\n\n⟩\n\\>" + "text": "Typing Symbols\n\n\n\n\nSymbol\nHow To Type It\n\n\n\n\n¬\n\\not or \\n\n\n\n∧\n\\and\n\n\n∨\n\\or or \\v\n\n\n→\n\\to or \\r or \\imp\n\n\n↔︎\n\\iff or \\lr\n\n\n∀\n\\forall or \\all\n\n\n∃\n\\exists or \\ex\n\n\n⦃\n\\{{\n\n\n⦄\n\\}}\n\n\n=\n=\n\n\n≠\n\\ne\n\n\n∈\n\\in\n\n\n∉\n\\notin or \\inn\n\n\n⊆\n\\sub\n\n\n⊈\n\\subn\n\n\n∪\n\\union or \\cup\n\n\n∩\n\\inter or \\cap\n\n\n⋃₀\n\\U0\n\n\n⋂₀\n\\I0\n\n\n\\\n\\\\\n\n\n∆\n\\symmdiff\n\n\n∅\n\\emptyset\n\n\n𝒫\n\\powerset\n\n\n·\n\\.\n\n\n←\n\\leftarrow or \\l\n\n\n↑\n\\uparrow or \\u\n\n\nℕ\n\\N\n\n\nℤ\n\\Z\n\n\nℚ\n\\Q\n\n\nℝ\n\\R\n\n\nℂ\n\\C\n\n\n≤\n\\le\n\n\n≥\n\\ge\n\n\n∣\n\\|\n\n\n×\n\\times or \\x\n\n\n∘\n\\comp or \\circ\n\n\n≡\n\\==\n\n\n∼\n\\sim or \\~\n\n\nₛ\n\\_s\n\n\nᵣ\n\\_r\n\n\n⟨\n\\<\n\n\n⟩\n\\>" } ] \ No newline at end of file diff --git a/inpreamble.tex b/inpreamble.tex index 06d8ba0..6279fb2 100644 --- a/inpreamble.tex +++ b/inpreamble.tex @@ -99,7 +99,7 @@ \newcommand{\incl}[1]{#1} \newcommand{\setmin}{\mathbin{\backslash}} -\newcommand{\symmdiff}{\bigtriangleup} +\newcommand{\symmdiff}{\mathbin{∆}} \pagenumbering{roman} %So front matter uses roman numerals. Switch back to arabic at beginning of preface. 
\publishers{\longcopyrightnotice} \ No newline at end of file diff --git a/mathjaxdefs.tex b/mathjaxdefs.tex index 426d908..90094cc 100644 --- a/mathjaxdefs.tex +++ b/mathjaxdefs.tex @@ -1,6 +1,6 @@ \ No newline at end of file