In my previous post, I walked through the task of formally deducing one lemma from another in Lean 4. The deduction was deliberately chosen to be short, and showcased only a small number of Lean tactics. Here I want to walk through the process I used for a slightly longer proof I worked out recently, after seeing the following challenge from Damek Davis: to formalize (in a civilized fashion) the proof of the following lemma:

Lemma. Let $(a_k)$ and $(D_k)$ be sequences of real numbers indexed by the natural numbers $k$, with $(a_k)$ non-increasing and $(D_k)$ non-negative. Suppose also that $a_k \leq D_k - D_{k+1}$ for all $k$. Then $a_k \leq \frac{D_0}{k+1}$ for all $k$.

Here I tried to draw on the lessons I had learned from the PFR formalization project, and to first establish a human-readable proof of the lemma before starting the Lean formalization, a lower-case "blueprint" rather than the fancier Blueprint used in the PFR project. The main idea of the proof here is to use the telescoping series identity

$\displaystyle \sum_{i=0}^{k} (D_i - D_{i+1}) = D_0 - D_{k+1}.$

Since $D_{k+1}$ is non-negative, and $a_i \leq D_i - D_{i+1}$ for all $i$ by hypothesis, we have

$\displaystyle \sum_{i=0}^{k} a_i \leq D_0;$

but by the monotone hypothesis on $a$, the left-hand side is at least $(k+1) a_k$, giving the claim. This is already a human-readable proof, but in order to formalize it more easily in Lean, I decided to rewrite it as a chain of inequalities, starting at $a_k$ and ending at $\frac{D_0}{k+1}$. With a little bit of pen and paper effort, I obtained

$a_k = \frac{(k+1) a_k}{k+1}$ (by field identities)

$= \frac{\sum_{i=0}^{k} a_k}{k+1}$ (by the formula for summing a constant)

$\leq \frac{\sum_{i=0}^{k} a_i}{k+1}$ (by the monotone hypothesis)

$\leq \frac{\sum_{i=0}^{k} (D_i - D_{i+1})}{k+1}$ (by the hypothesis $a_i \leq D_i - D_{i+1}$)

$= \frac{D_0 - D_{k+1}}{k+1}$ (by telescoping series)

$\leq \frac{D_0}{k+1}$ (by the non-negativity of $D_{k+1}$).

I decided that this was a good enough blueprint for me to work with. The next step is to formalize the statement of the lemma in Lean. For this quick project, it was convenient to use the online Lean playground, rather than my local IDE, so the screenshots will look a little different from those in the previous post. (If you like, you can follow this tour in that playground, by clicking on the screenshots of the Lean code.) I begin by importing Lean's math library, and starting an example of a statement to state and prove.

Now we have to declare the variables and hypotheses. The main variables here are the sequences $a_k$ and $D_k$, which in Lean are best modeled by functions `a`, `D` from the natural numbers ℕ to the reals ℝ. (One can choose to "hardwire" the non-negativity hypothesis into `D` by making `D` take values in the nonnegative reals (denoted `NNReal` in Lean), but this turns out to be inconvenient, because the laws of algebra and summation that we will need are clunkier on the non-negative reals (which are not even a group) than on the reals (which are a field).) So we add the variables `(a D : ℕ → ℝ)`.

Now we add the hypotheses, which in Lean convention are usually given names beginning with `h`. This is fairly straightforward; the one thing worth noting is that the property of being monotone decreasing already has a name in Lean's Mathlib, namely `Antitone`

, and it is usually a good idea to use the Mathlib-provided terminology (because that library contains a lot of useful lemmas about such terms). One thing to note here is that Lean is quite good at filling in implied ranges of variables. Because `a` and `D` have the natural numbers ℕ as their domain, the dummy variable `k` in these hypotheses is automatically quantified over ℕ. We could have made this quantification explicit if we so chose, for instance writing `∀ k : ℕ, 0 ≤ D k` instead of `∀ k, 0 ≤ D k`, but it is not necessary to do so. Also note that Lean does not require parentheses when applying functions: we write `D k` here rather than `D(k)` (which in fact does not compile in Lean unless one puts a space between the `D` and the parentheses). This is slightly different from standard mathematical notation, but is not too difficult to get used to. This seems like the end of the hypotheses, so we can now add a colon to move on to the conclusion, and then add that conclusion:
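Since the playground screenshot is not reproduced here, the statement at this stage would read something like the following (a reconstruction; the hypothesis names `ha`, `hpos`, `hD` are the ones referred to later in this tour). Note that Lean will not accept it yet, as no proof has been supplied:

```lean
import Mathlib

example (a D : ℕ → ℝ) (ha : Antitone a) (hpos : ∀ k, 0 ≤ D k)
    (hD : ∀ k, a k ≤ D k - D (k + 1)) :
    ∀ k, a k ≤ D 0 / (k + 1)
```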

This is a perfectly fine Lean statement. But it turns out that when proving a universally quantified statement such as `∀ k, a k ≤ D 0 / (k + 1)`, the first step is almost always to open up the quantifier to introduce the variable `k` (using the Lean command `intro k`). Because of this, it is slightly more efficient to hide the universal quantifier by placing the variable `k` among the hypotheses, rather than in the quantifier (in which case we now have to specify that it is a natural number, as Lean can no longer deduce this from context):

At this point Lean complains of an unexpected end of input: the example has been stated, but not proved. We will temporarily mollify Lean by adding a `sorry` as the purported proof. Now Lean is content, other than giving a warning (as indicated by the yellow squiggle under the `example`) that the proof contains a sorry.
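In text form, the refactored example with its placeholder proof might read as follows (a reconstruction of the screenshot):

```lean
import Mathlib

example (a D : ℕ → ℝ) (ha : Antitone a) (hpos : ∀ k, 0 ≤ D k)
    (hD : ∀ k, a k ≤ D k - D (k + 1)) (k : ℕ) :
    a k ≤ D 0 / (k + 1) := by
  sorry
```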

It is now time to follow the blueprint. The Lean tactic for proving an inequality via chains of other inequalities is known as `calc`. We use the blueprint to fill in the `calc` that we want, leaving the justifications of each step as "`sorry`"s for now:
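A plausible rendering of this `calc` skeleton in text form (my reconstruction of the screenshot, following the blueprint step by step) is:

```lean
import Mathlib

open Finset BigOperators

example (a D : ℕ → ℝ) (ha : Antitone a) (hpos : ∀ k, 0 ≤ D k)
    (hD : ∀ k, a k ≤ D k - D (k + 1)) (k : ℕ) :
    a k ≤ D 0 / (k + 1) := by
  calc a k = ((k : ℝ) + 1) * a k / (k + 1) := by sorry
    _ = (∑ _i in range (k + 1), a k) / (k + 1) := by sorry
    _ ≤ (∑ i in range (k + 1), a i) / (k + 1) := by sorry
    _ ≤ (∑ i in range (k + 1), (D i - D (i + 1))) / (k + 1) := by sorry
    _ = (D 0 - D (k + 1)) / (k + 1) := by sorry
    _ ≤ D 0 / (k + 1) := by sorry
```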

Here, we "`open`"ed the `Finset` namespace in order to easily access `Finset`'s `range` function, with `range k` basically being the finite set of natural numbers `{0, 1, ..., k-1}`, and also "`open`"ed the `BigOperators` namespace to access the familiar ∑ notation for (finite) summation, in order to make the steps in the Lean code resemble the blueprint as much as possible. One could avoid opening these namespaces, but then expressions such as `∑ i in range (k+1), a i` would instead have to be written as something like `Finset.sum (Finset.range (k+1)) (fun i ↦ a i)`, which looks a lot less like standard mathematical writing. The proof structure here may remind some readers of the "two-column proofs" that are somewhat popular in American high school geometry classes.

Now we have six sorries to fill. Navigating to the first sorry, Lean tells us the ambient hypotheses, and the goal that we need to prove in order to fill that sorry:

The ⊢ symbol here is Lean's marker for the goal. The up-arrows ↑ are coercion symbols, indicating that the natural number `k` has to be converted to a real number in order to interact via arithmetic operations with other real numbers such as `a k`; but we can ignore these coercions for this tour (for this proof, it turns out that Lean will basically manage them automatically, without need for any explicit intervention by a human). The goal here is a routine algebraic identity; it involves division, so one has to check that the denominator is non-zero, but this is routine. In Lean, a convenient way to establish algebraic identities is to first use the tactic `field_simp` to clear denominators, and then use `ring` to verify any identity that is valid for commutative rings. This works, and clears the first `sorry`:
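In text form, the first step of the `calc` block might now read as follows (a sketch; in simple cases `field_simp` alone may already close the goal, making the `ring` unnecessary):

```lean
calc a k = ((k : ℝ) + 1) * a k / (k + 1) := by
      field_simp
      -- if a commutative-ring identity remains after clearing
      -- denominators, follow up with `ring`
```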

`field_simp`, by the way, is smart enough to deduce on its own that the denominator `k+1` here is manifestly non-zero (and in fact positive); no human intervention is required to point this out. The same will be true for the other "clearing denominators" steps that we will encounter in other parts of the proof. Now we navigate to the next `sorry`. Lean tells us the goal and hypotheses:

We can reduce the goal by canceling out the common denominator `↑k + 1`. Here we can use the handy Lean tactic `congr`, which tries to match two sides of an equality goal as much as possible, and leaves any remaining discrepancies between the two sides as further goals to be proven. Applying `congr`, the goal reduces to an identity between the two numerators. Here one might imagine that this is something that one proves by induction; but this particular sort of identity, summing a constant over a finite set, is already covered by Mathlib. Indeed, searching for `Finset`, `sum`, and `const` quickly leads us to the `Finset.sum_const` lemma, which is what we want here. But there is an even more convenient route to take, which is to apply the powerful tactic `simp`, which tries to simplify the goal as much as possible using all the "`simp` lemmas" Mathlib has to offer (of which `Finset.sum_const` is an example, but there are thousands of others). As it turns out, `simp` completely kills off this identity, without any further human intervention.

Now we move on to the next sorry, and look at our goal:

`congr` does not work here, because we have an inequality instead of an equality; but there is a powerful relative `gcongr` of `congr` that is perfectly suited to inequalities. It can also open up sums, integrals, and products, reducing global inequalities between such expressions to pointwise inequalities. If we invoke `gcongr with i hi` (where we tell `gcongr` to use `i` for the variable being opened up, and `hi` for the constraint this variable will satisfy), we arrive at a significantly simplified goal (and a new ambient variable and hypothesis):

Now we need to use the monotonicity hypothesis on `a`, which we have named `ha` here. Looking at the Mathlib documentation for `Antitone`, one locates a lemma that looks applicable here: `Antitone.imp`. One can apply this lemma in this case by writing `apply Antitone.imp ha`, but because `ha` is already of type `Antitone`, we can abbreviate this to `apply ha.imp`. (Actually, as indicated in the documentation, due to the way `Antitone` is defined, we can even just use `apply ha` here.) This reduces the goal nicely, and the goal is now very close to the hypothesis `hi`. One could now look up the documentation for `Finset.range` to see how to unpack `hi`, but as before, `simp` can do this for us. Invoking `simp at hi`, the goal and hypothesis become very close indeed; here we can just close the goal using the `linarith` tactic we used in the previous tour. The next sorry can be dispatched by similar methods, using the hypothesis `hD` applied at the variable `i`:
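In text form, these two steps of the `calc` block might read as follows (a sketch reconstructing the tactics described above):

```lean
_ ≤ (∑ i in range (k + 1), a i) / (k + 1) := by
    gcongr with i hi
    apply ha        -- reduces the goal `a k ≤ a i` to `i ≤ k`
    simp at hi      -- unpacks `hi : i ∈ range (k + 1)` to `i < k + 1`
    linarith
_ ≤ (∑ i in range (k + 1), (D i - D (i + 1))) / (k + 1) := by
    gcongr with i hi
    exact hD i
```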

Now for the penultimate sorry. As in a previous step, we can use `congr` to remove the denominator, leaving us with a telescoping series identity. One could try to prove it by induction, or one could see whether this identity is already in Mathlib. Searching for `Finset`, `sum`, and `sub` will locate the right tool, `sum_range_sub'` (as the fifth hit), but a simpler way to proceed here is to use the `exact?` tactic we saw in the previous tour. A quick check of the documentation for `sum_range_sub'` confirms that this is what we want. Actually, we can just use `apply sum_range_sub'` here, as the `apply` tactic is smart enough to fill in the missing arguments:
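In text form, the telescoping step might read (a sketch):

```lean
_ = (D 0 - D (k + 1)) / (k + 1) := by
    congr 1                -- cancel the common denominator
    apply sum_range_sub'   -- ∑ i in range n, (f i - f (i + 1)) = f 0 - f n
```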

One last `sorry` to go. As before, we use `gcongr` to cancel denominators, leaving us with a simple remaining goal. This looks easy, because the hypothesis `hpos` tells us that `D (k+1)` is nonnegative; specifically, the instance `hpos (k+1)` of that hypothesis states exactly this. The `linarith` tactic will then dispatch the goal once it is told about this particular instance.

We now have a complete proof, with no more yellow squiggly line in the example. There are two warnings though: two variables `i` and `hi` introduced in the proof have been noticed by Lean's "linter" as not actually being used in the proof. We can rename them with underscores to tell Lean that we are fine with them not being used. This is a perfectly fine proof, but upon noticing that several of the steps are similar to each other, one can do a bit of "code golf" as in the previous tour to compactify the proof a bit. With enough familiarity with the Lean language, the resulting proof actually tracks quite closely with (an optimized version of) the human blueprint.

This concludes the tour of a lengthier Lean proving exercise. I am finding that the pre-planning step of the proof (using an informal "blueprint" to break the proof down into extremely granular pieces) makes the formalization process significantly easier than before (when I often adopted a sequential process of writing one line of code at a time, without first sketching out a skeleton of the argument). (The proof here took only about 15 minutes to create initially, although for this post I had to recreate it with screenshots and supporting links, which took significantly more time.)
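For reference, here is one way the completed proof might look in full after such golfing; this is an untested reconstruction that follows the steps of this tour, and the exact tactic calls in the original screenshots may differ:

```lean
import Mathlib

open Finset BigOperators

example (a D : ℕ → ℝ) (ha : Antitone a) (hpos : ∀ k, 0 ≤ D k)
    (hD : ∀ k, a k ≤ D k - D (k + 1)) (k : ℕ) :
    a k ≤ D 0 / (k + 1) := by
  calc a k = ((k : ℝ) + 1) * a k / (k + 1) := by field_simp
    _ = (∑ _i in range (k + 1), a k) / (k + 1) := by congr 1; simp [mul_comm]
    _ ≤ (∑ i in range (k + 1), a i) / (k + 1) := by
        gcongr with i hi
        apply ha
        simp at hi
        linarith
    _ ≤ (∑ i in range (k + 1), (D i - D (i + 1))) / (k + 1) := by
        gcongr with i _hi
        exact hD i
    _ = (D 0 - D (k + 1)) / (k + 1) := by congr 1; apply sum_range_sub'
    _ ≤ D 0 / (k + 1) := by
        gcongr
        linarith [hpos (k + 1)]
```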
I believe that a reasonable near-term goal for AI is to be able to fill in automatically a significant fraction of the sorts of atomic "`sorry`"s of the size one saw in this proof, allowing one to convert a blueprint to a formal Lean proof much more rapidly. One final remark: in this tour I filled in the "`sorry`"s in the order in which they appeared, but there is in fact no requirement that one do so, and once one has used a blueprint to atomize a proof into self-contained smaller pieces, one can fill them in in any order. Importantly for a group project, these micro-tasks can be parallelized, with different contributors claiming whichever "`sorry`" they feel qualified to solve, and working independently of each other. (And, because Lean can automatically verify whether their proof is correct, there is no need to have a pre-existing bond of trust with these contributors in order to accept their contributions.) Because the requirements of a "`sorry`" are purely local, someone can make a meaningful contribution to the proof by working on an extremely localized component of it, without needing the mathematical expertise to understand the global argument. This is not particularly important in this simple case, where the entire lemma is not too difficult for a trained mathematician to understand, but can become quite relevant for complex formalization projects.