
A slightly longer Lean 4 proof tour



In my previous post, I walked through the task of formally deducing one lemma from another in Lean 4. The deduction was deliberately chosen to be short, and only showcased a small number of Lean tactics. Here I want to walk through the process I used for a slightly longer proof I worked out recently, after seeing the following challenge from Damek Davis: to formalize (in a civilized fashion) the proof of the following lemma:

Lemma. Let $\{a_k\}$ and $\{D_k\}$ be sequences of real numbers indexed by the natural numbers $k=0,1,\dots$, with $a_k$ non-increasing and $D_k$ non-negative. Suppose also that $a_k \leq D_k - D_{k+1}$ for all $k \geq 0$. Then $a_k \leq \frac{D_0}{k+1}$ for all $k$.

Here I tried to draw upon the lessons I had learned from the PFR formalization project, and to first set out a human-readable proof of the lemma before starting the Lean formalization, a lower-case "blueprint" rather than the fancier Blueprint used in the PFR project. The main idea of the proof here is to use the telescoping series identity

$\displaystyle \sum_{i=0}^k D_i - D_{i+1} = D_0 - D_{k+1}.$

Since $D_{k+1}$ is non-negative, and $a_i \leq D_i - D_{i+1}$ by hypothesis, we have

$\displaystyle \sum_{i=0}^k a_i \leq D_0,$

but by the monotone hypothesis on $a$, the left-hand side is at least $(k+1) a_k$, giving the claim.

This is already a human-readable proof, but in order to formalize it more easily in Lean, I decided to rewrite it as a chain of inequalities, starting at $a_k$ and ending at $D_0/(k+1)$. With a little bit of pen and paper effort, I obtained

$\displaystyle a_k = (k+1) a_k / (k+1)$ (by field identities)

$\displaystyle = (\sum_{i=0}^k a_k) / (k+1)$ (by the formula for summing a constant)

$\displaystyle \leq (\sum_{i=0}^k a_i) / (k+1)$ (by the monotone hypothesis)

$\displaystyle \leq (\sum_{i=0}^k D_i - D_{i+1}) / (k+1)$ (by the hypothesis $a_i \leq D_i - D_{i+1}$)

$\displaystyle = (D_0 - D_{k+1}) / (k+1)$ (by telescoping series)

$\displaystyle \leq D_0 / (k+1)$ (by the non-negativity of $D_{k+1}$).


I decided that this was a good enough blueprint for me to work with. The next step is to formalize the statement of the lemma in Lean. For this quick project, it was convenient to use the online Lean playground rather than my local IDE, so the screenshots will look a little different from those in the previous post. (If you like, you can follow this tour in that playground, by clicking on the screenshots of the Lean code.) I begin by importing Lean's mathematics library, and starting an example of a statement to state and prove.

Now we have to declare the variables and hypotheses. The main variables here are the sequences $a_k$ and $D_k$, which in Lean are best modeled by functions a, D from the natural numbers ℕ to the reals ℝ. (One could choose to "hardwire" the non-negativity hypothesis into the $D_k$ by making D take values in the nonnegative reals ${\bf R}^+$, denoted NNReal in Lean, but this turns out to be inconvenient, because the laws of algebra and summation that we will need are clunkier on the non-negative reals (which are not even a group) than on the reals (which are a field).) So we add in the variables, and then the hypotheses, which by Lean convention are usually given names beginning with h.

This is fairly straightforward; the one thing to know is that the property of being monotone decreasing already has a name in Lean's Mathlib, namely Antitone, and it is usually a good idea to use the Mathlib-provided terminology (because that library contains a lot of useful lemmas about such terms).

One thing to note here is that Lean is quite good at filling in implied ranges of variables. Because a and D have the natural numbers ℕ as their domain, the dummy variable k in these hypotheses is automatically quantified over ℕ. We could have made this quantification explicit if we so chose, for instance by writing ∀ k : ℕ, 0 ≤ D k instead of ∀ k, 0 ≤ D k, but it is not necessary to do so. Also note that Lean does not require parentheses when applying functions: we write D k here rather than D(k) (which in fact does not compile in Lean unless one puts a space between the D and the parentheses). This is slightly different from standard mathematical notation, but is not too hard to get used to.

This looks like the end of the hypotheses, so we can now add a colon to move on to the conclusion, and then add that conclusion:
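Since the screenshots are not reproduced in this text, here is a sketch of what the statement at this stage might look like (the hypothesis names are illustrative guesses, and the sorry that is only added a little later in the post is included so that the snippet compiles on its own):

```lean
import Mathlib

-- a sketch of the statement with the universal quantifier still in the conclusion
example (a D : ℕ → ℝ) (ha : Antitone a) (hD : ∀ k, a k ≤ D k - D (k + 1))
    (hpos : ∀ k, 0 ≤ D k) : ∀ k, a k ≤ D 0 / (k + 1) := by
  sorry
```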

image 3

∀ k, a k ≤ D 0/ (k + 1), the initial step is often to open the quantifier to present the variable k

(utilizing the Lean command calc introduction k).|It turns out that when showing a generally measured declaration such as ∀ k, a k ≤ D 0/ (k + 1), the very first action is practically constantly to open up the quantifier to present the variable k

image 6

(utilizing the Lean command introduction k).} Due to the fact that of this, it is a little more effective to conceal the universal quantifier by putting the variable Finset k in the hypotheses, instead of in the quantifier (in which case we need to now define that it is a natural number, as Lean can no longer deduce this from context): At this point Lean is suffering an unforeseen end of input: the example has actually been mentioned, however not shown. We will briefly mollify Lean by including a range sorry as the supposed evidence: Now Lean is material, aside from offering a caution (as suggested by the yellow squiggle under the {0,dots,k-1} example) that the evidence includes a sorry. It is now time to follow the plan. The Lean strategy for showing an inequality through chains of other inequalities is referred to asBigOperators We utilize the plan to fill out the calc that we desire, leaving the reasons of each action as “ sorry” s in the meantime:
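The skeleton presumably looks something like the following sketch (with the variable k now among the hypotheses; names and exact coercions are guesses rather than a copy of the screenshot):

```lean
import Mathlib

open Finset BigOperators

-- the calc skeleton: one step per line of the blueprint, each justified by a sorry
example (a D : ℕ → ℝ) (ha : Antitone a) (hD : ∀ k, a k ≤ D k - D (k + 1))
    (hpos : ∀ k, 0 ≤ D k) (k : ℕ) : a k ≤ D 0 / (k + 1) := by
  calc a k = (k + 1) * a k / (k + 1) := by sorry
    _ = (∑ i in range (k + 1), a k) / (k + 1) := by sorry
    _ ≤ (∑ i in range (k + 1), a i) / (k + 1) := by sorry
    _ ≤ (∑ i in range (k + 1), (D i - D (i + 1))) / (k + 1) := by sorry
    _ = (D 0 - D (k + 1)) / (k + 1) := by sorry
    _ ≤ D 0 / (k + 1) := by sorry
```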

Here, we "open"ed the Finset namespace in order to easily access Finset's range function, with range k basically being the finite set of natural numbers $\{0,\dots,k-1\}$, and also "open"ed the BigOperators namespace to access the familiar ∑ notation for (finite) summation, in order to make the steps in the Lean code resemble the blueprint as much as possible. One could avoid opening these namespaces, but then an expression such as ∑ i in range (k+1), a i would instead have to be written as something like Finset.sum (Finset.range (k+1)) (fun i ↦ a i), which looks a lot less like standard mathematical writing.
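As a quick sanity check (not from the post), the ∑ notation really is just an abbreviation for Finset.sum, so the two spellings are definitionally equal:

```lean
import Mathlib

open Finset BigOperators

-- the big-operator notation unfolds definitionally to Finset.sum
example (a : ℕ → ℝ) (k : ℕ) :
    (∑ i in range (k + 1), a i) = Finset.sum (Finset.range (k + 1)) (fun i ↦ a i) := rfl
```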

The proof structure here may remind some readers of the "two column proofs" that are somewhat popular in American high school geometry classes.

Now we have six sorries to fill. Navigating to the first sorry, Lean tells us the ambient hypotheses, and the goal that we need to prove in order to fill that sorry. The ⊢ symbol here is Lean's marker for the goal. The up arrows ↑ are coercion symbols, indicating that the natural number k has to be converted to a real number in order to interact via arithmetic operations with other real numbers such as a k; but we can ignore these coercions for this tour (for this proof, it turns out that Lean will basically manage them automatically, without the need for any explicit intervention by a human).

The goal here is a self-evident algebraic identity; it involves division, so one has to check that the denominator is non-zero, but this is obvious. In Lean, a convenient way to establish algebraic identities is to use the field_simp tactic to clear denominators, and then the ring tactic to verify any identity that is valid for commutative rings. This works, and clears the first sorry. field_simp, by the way, is smart enough to deduce on its own that the denominator k+1 here is manifestly non-zero (and in fact positive); no human intervention is required to point this out. The same goes for the other "clearing denominator" steps that we will encounter in the other parts of the proof.
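As a toy illustration of this recipe (not taken from the post), here is a small identity where field_simp clears the denominator using a hypothesis from the context, and ring then finishes the resulting polynomial identity:

```lean
import Mathlib

-- "field_simp, then ring": clear the denominator (using hy), then close the
-- remaining polynomial identity
example (x y : ℝ) (hy : y ≠ 0) : (x + y) ^ 2 / y = x ^ 2 / y + 2 * x + y := by
  field_simp
  ring
```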

Now we navigate to the next sorry, and Lean again tells us the hypotheses and the goal. We can reduce the goal by cancelling out the common denominator ↑k + 1. Here we can use the handy Lean tactic congr, which tries to match the two sides of an equality goal as much as possible, and leaves any remaining discrepancies between the two sides as further goals to be proven. Applying congr, the goal reduces to the identity that summing the constant a k over range (k+1) gives (k+1) times a k (up to coercions). Here one might imagine that this is something to prove by induction. But this particular sort of identity, summing a constant over a finite set, is already covered by Mathlib: searching for Finset, sum, and const soon leads us to the lemma Finset.sum_const.

But there is an even more convenient path to take here, which is to apply the powerful tactic simp, which tries to simplify the goal as much as possible using all the "simp lemmas" that Mathlib has to offer (of which Finset.sum_const is one example, but there are thousands of others). As it turns out, simp kills off this identity completely, without any further human intervention.
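Here is a small standalone example (my own toy version, not from the post) of the kind of constant-summing identity that simp dispatches via lemmas such as Finset.sum_const and Finset.card_range:

```lean
import Mathlib

open Finset BigOperators

-- summing a constant over `range n`: simp handles this with no induction needed
example (c : ℝ) (n : ℕ) : ∑ _i in range n, c = n * c := by
  simp
```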

Now we move on to the next sorry, and look at our goal. congr does not work here, because we now have an inequality rather than an equality; but there is a powerful relative of congr, called gcongr, that is perfectly suited to inequalities. It can also open up sums, integrals, and products, reducing global inequalities between such quantities to pointwise inequalities. If we invoke gcongr with i hi (where we tell gcongr to use i for the variable being opened up, and hi for the constraint that this variable satisfies), we arrive at a much simplified goal (and a new ambient variable and hypothesis). Now we need to use the monotonicity hypothesis on a, which we have named ha here.

Looking at the documentation for Antitone, one finds a lemma that looks applicable: one can apply it in this case by writing apply Antitone.imp ha, but because ha is already of type Antitone, we can abbreviate this to apply ha.imp. (Actually, as indicated in the documentation, due to the way Antitone is defined, we can even just use apply ha here.) This reduces the goal nicely.

The goal is now very close to the hypothesis hi. One could look up the documentation for Finset.range to see how to unpack hi, but as before simp can do this for us: invoking simp at hi, the goal and hypothesis become very close indeed.

Here we can just close the goal using the linarith tactic that we used in the previous tour. The next sorry can be resolved by similar methods, using the hypothesis hD applied at the variable i.
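To see this step in isolation, here is a standalone version of the pointwise goal that gcongr leaves behind for the monotonicity step (the statement and hypothesis names are a reconstruction of what the screenshots show, not a copy of them):

```lean
import Mathlib

open Finset

-- the pointwise goal produced by `gcongr with i hi`
example (a : ℕ → ℝ) (ha : Antitone a) (k i : ℕ) (hi : i ∈ range (k + 1)) :
    a k ≤ a i := by
  apply ha      -- monotonicity reduces the goal to `i ≤ k`
  simp at hi    -- `hi : i ∈ range (k + 1)` becomes `hi : i < k + 1`
  linarith
```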

Now for the penultimate sorry. As in a previous step, we can use congr to remove the denominator, leaving us with a telescoping series identity. One could try to prove it by induction, or one could try to see whether this identity is already in Mathlib. Searching for Finset, sum, and sub will locate the right tool (as the fifth hit), but a simpler way to proceed here is to use the exact?

tactic that we saw in the previous tour. A quick check of the documentation for sum_range_sub' confirms that this is what we want. Actually, we can just write apply sum_range_sub' here, as the apply tactic is smart enough to fill in the missing arguments:
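For reference, here is a standalone check (a reconstruction, not the post's code) that Finset.sum_range_sub' is exactly the telescoping identity we need:

```lean
import Mathlib

open Finset BigOperators

-- the telescoping identity, supplied directly by `Finset.sum_range_sub'`
example (D : ℕ → ℝ) (k : ℕ) :
    ∑ i in range (k + 1), (D i - D (i + 1)) = D 0 - D (k + 1) :=
  sum_range_sub' D (k + 1)
```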



One last sorry to go. As before, we use gcongr to cancel denominators, leaving us with the claim that D 0 - D (k+1) is at most D 0. This looks easy, because the hypothesis hpos tells us that D (k+1) is nonnegative; specifically, the instance hpos (k+1) of that hypothesis states exactly this. The linarith tactic will then resolve this goal once it is told about this particular instance.

We now have a complete proof: no more yellow squiggly line in the example. There are two warnings though: there are two variables i and hi introduced in the proof that Lean's "linter" has noticed are not actually used in the proof. We can rename them with underscores to tell Lean that we are fine with them not being used.

This is a perfectly fine proof, but upon noticing that several of the steps are similar to each other, one can do a bit of "code golf" as in the previous tour to compactify the proof a bit (a sketch of what the compactified proof might look like appears at the end of this post). With enough familiarity with the Lean language, this proof actually tracks quite closely with (an optimized version of) the human blueprint.

This concludes the tour of a lengthier Lean proving exercise. I am finding that the pre-planning step of the proof (using an informal "blueprint" to break the proof down into extremely granular pieces) makes the formalization process significantly easier than before (when I often adopted a sequential process of writing one line of code at a time, without first sketching out a skeleton of the argument). (The proof here took only about 15 minutes to create initially, although for this post I had to recreate it with screenshots and supporting links, which took significantly more time.) I believe that a reasonable near-term goal for AI is to be able to fill in automatically a significant fraction of the sorts of atomic "sorry"s of the size one saw in this proof, allowing one to convert a blueprint to a formal Lean proof much more rapidly.

One final remark: in this tour I filled in the "sorry"s in the order in which they appeared, but there is actually no requirement that one does so, and once one has used a blueprint to atomize a proof into self-contained smaller pieces, one can fill them in in any order. Importantly for a group project, these micro-tasks can be parallelized, with different contributors claiming whichever "sorry" they feel qualified to solve, and working independently of each other. (And, because Lean can automatically verify whether their proof is correct, there is no need to have a pre-existing bond of trust with these contributors in order to accept their contributions.) Because the specification of a "sorry" is self-contained, someone can make a meaningful contribution to the proof by working on an extremely localized component of it, without needing the mathematical expertise to understand the global argument. This is not particularly important in this simple case, where the entire lemma is not too difficult for a trained mathematician to understand, but it can become quite relevant for complex formalization projects.
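Since the screenshots are not reproduced above, here is a sketch of what the completed (and lightly golfed) proof might look like, assembling the steps discussed in this tour; individual tactic calls may need minor adjustment depending on the Mathlib version:

```lean
import Mathlib

open Finset BigOperators

-- a sketch of the finished proof, one calc step per line of the blueprint
example (a D : ℕ → ℝ) (ha : Antitone a) (hD : ∀ k, a k ≤ D k - D (k + 1))
    (hpos : ∀ k, 0 ≤ D k) (k : ℕ) : a k ≤ D 0 / (k + 1) := by
  calc a k = (k + 1) * a k / (k + 1) := by
        -- clear the denominator; `ring` (chained with `<;>`) mops up any leftover algebra
        field_simp <;> ring
    _ = (∑ i in range (k + 1), a k) / (k + 1) := by
        congr 1                         -- cancel the common denominator
        simp                            -- summing a constant, via Finset.sum_const
    _ ≤ (∑ i in range (k + 1), a i) / (k + 1) := by
        gcongr with i hi                -- reduce to a pointwise inequality
        apply ha                        -- monotonicity: goal becomes `i ≤ k`
        simp at hi
        linarith
    _ ≤ (∑ i in range (k + 1), (D i - D (i + 1))) / (k + 1) := by
        gcongr with i _
        exact hD i
    _ = (D 0 - D (k + 1)) / (k + 1) := by
        congr 1                         -- cancel the common denominator
        exact sum_range_sub' D (k + 1)  -- telescoping sum
    _ ≤ D 0 / (k + 1) := by
        gcongr                          -- reduce to `D 0 - D (k + 1) ≤ D 0`
        linarith [hpos (k + 1)]
```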
