Tim Gowers, Ben Green, Freddie Manners, and I have just uploaded to the arXiv our paper "On a conjecture of Marton". This paper proves a version of the well-known Polynomial Freiman–Ruzsa conjecture (first proposed by Katalin Marton):
Theorem 1 (Polynomial Freiman–Ruzsa conjecture) Let $A \subset {\bf F}_2^n$ be such that $|A+A| \leq K|A|$. Then $A$ can be covered by at most $2K^{12}$ translates of a subspace $H$ of ${\bf F}_2^n$ of cardinality at most $|A|$.
The previous best known result towards this conjecture was by Konyagin (as communicated in this paper of Sanders), who obtained a similar result but with $K^{12}$ replaced by $\exp(O(\log^{3+\varepsilon} K))$ for any $\varepsilon > 0$ (assuming say $K \geq 3/2$ to avoid some degeneracies as $K$ approaches $1$, which is not the difficult case of the conjecture). The conjecture (with $K^{12}$ replaced by an unspecified constant $C(K)$) has a number of equivalent forms; see this survey of Green, and these papers of Lovett and of Green and myself for some examples; in particular, as discussed in the latter two references, the constants in the inverse $U^3$ theorem are now polynomial in nature (although we did not try to optimize the constant).
The exponent $12$ here was the product of a large number of optimizations to the argument (our original exponent was closer to $1000$), but can be improved even further with additional effort (our current argument, for instance, allows one to replace it with $7+\sqrt{17} = 11.123\dots$, but we decided to state our result using integer exponents instead).
In this paper we will focus exclusively on the characteristic $2$ case (so we will be cavalier in identifying addition and subtraction), but in a followup paper we will establish similar results in other finite characteristics.
Much of the previous progress on this sort of result has proceeded through Fourier analysis. Perhaps surprisingly, our approach uses no Fourier analysis whatsoever, being conducted instead entirely in "physical space". Broadly speaking, it follows a natural strategy, which is to induct on the doubling constant $|A+A|/|A|$. Indeed, suppose for instance that one could show that every set $A$ of doubling constant $K$ was "commensurate" in some sense to a set $A'$ of doubling constant at most $K^{0.99}$. One measure of commensurability, for instance, might be the Ruzsa distance $\log \frac{|A+A'|}{|A|^{1/2}|A'|^{1/2}}$, which one might hope to control by $O(\log K)$. Then one could iterate this procedure until the doubling constant dropped below, say, $3/2$, at which point the conjecture is known to hold (there is an elementary argument that if $A$ has doubling constant less than $3/2$, then $A+A$ is in fact a subspace of ${\bf F}_2^n$). One can then use several applications of the Ruzsa triangle inequality to conclude (the fact that we reduce $K$ to $K^{0.99}$ means that the various Ruzsa distances that need to be summed are controlled by a convergent geometric series).
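Since everything here lives in ${\bf F}_2^n$, these notions are easy to experiment with on a computer: encode group elements as integers, with XOR as addition. The following toy Python sketch (my own illustration, not from the paper) computes doubling constants and brute-forces, over all subsets of ${\bf F}_2^3$, the elementary fact just quoted: doubling constant below $3/2$ forces $A+A$ to be a subspace.

```python
from itertools import combinations

def sumset(A, B):
    # A + B in F_2^n: addition is bitwise XOR on integer labels
    return {a ^ b for a in A for b in B}

def doubling(A):
    return len(sumset(A, A)) / len(A)

def is_subspace(S):
    # a subset of F_2^n is a subspace iff it contains 0 and is XOR-closed
    return 0 in S and all(x ^ y in S for x in S for y in S)

# A subspace has doubling constant exactly 1.
H = {0b00, 0b01, 0b10, 0b11}  # the span of e1, e2 inside F_2^3
assert doubling(H) == 1.0

# Brute force over all nonempty subsets of F_2^3: doubling constant
# strictly below 3/2 forces the sumset A + A to be a subspace.
n = 3
for r in range(1, 2 ** n + 1):
    for A in combinations(range(2 ** n), r):
        A = set(A)
        if 2 * len(sumset(A, A)) < 3 * len(A):
            assert is_subspace(sumset(A, A))
print("elementary fact verified in F_2^3")
```

(The same loop passes for larger $n$, at exponentially growing cost.)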
There are a number of possible ways to try to "improve" a set $A$ of not too large doubling by replacing it with a commensurate set of better doubling. We note two particular potential improvements:

(i) replacing $A$ with the sumset $A+A$;

(ii) replacing $A$ with an intersection $A \cap (A+h)$ with a translate of itself.
Unfortunately, there are sets $A$ where neither of the above two operations (i), (ii) significantly improves the doubling constant. For instance, if $A$ is a random density $1/\sqrt{K}$ subset of $\sqrt{K}$ random translates of a medium-sized subspace $H$, one can check that the doubling constant stays close to $K$ if one applies either operation (i) or operation (ii). But in this case these operations do not actually worsen the doubling constant much either, and by applying some combination of (i) and (ii) (either intersecting $A+A$ with a translate, or taking a sumset of $A \cap (A+h)$ with itself) one can start lowering the doubling constant again.
This begins to suggest a potential strategy: show that at least one of the operations (i) or (ii) will improve the doubling constant, or at least not worsen it too much; and in the latter case, perform some more complicated operation to locate the desired improvement in the doubling constant.
A sign that this strategy might have a chance of working is provided by the following heuristic argument. If $A$ has doubling constant $K$, then the Cartesian product $A \times A$ has doubling constant $K^2$. On the other hand, by using the projection map $\pi \colon {\bf F}_2^n \times {\bf F}_2^n \to {\bf F}_2^n$ defined by $\pi(x,y) := x+y$, we see that $A \times A$ projects to $A+A$, with the fibers $\pi^{-1}(\{h\}) \cap (A \times A)$ being essentially copies of the intersections $A \cap (A+h)$. So, morally, $A \times A$ also behaves like a "skew product" of $A+A$ and the fibers $A \cap (A+h)$, which suggests (non-rigorously) that the doubling constant $K^2$ of $A \times A$ is also something like the doubling constant of $A+A$, times the doubling constant of a typical fiber $A \cap (A+h)$. This would imply that at least one of $A+A$ and $A \cap (A+h)$ would have doubling constant at most $K$, and thus that at least one of operations (i), (ii) would not worsen the doubling constant.
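The exact bookkeeping behind this heuristic is mechanical: the Cartesian square really does have doubling constant exactly $K^2$, and the fibers of the addition map really are copies of $A \cap (A+h)$. Here is a small Python sketch of mine checking both facts for an arbitrarily chosen set $A \subset {\bf F}_2^4$ (the set itself is not from the paper):

```python
def sumset(A):
    # A + A in F_2^n, with XOR as addition
    return {a ^ b for a in A for b in A}

A = {0b0000, 0b0001, 0b0010, 0b0111, 0b1000}   # an arbitrary subset of F_2^4
K = len(sumset(A)) / len(A)                    # doubling constant of A

# The Cartesian square, with coordinatewise XOR as the group operation.
AxA = {(a, b) for a in A for b in A}
sum_AxA = {(a ^ c, b ^ d) for (a, b) in AxA for (c, d) in AxA}
assert len(AxA) == len(A) ** 2
assert len(sum_AxA) == len(sumset(A)) ** 2     # so A x A has doubling exactly K^2

# The fiber of A x A over h under pi(x, y) = x + y consists of the pairs
# (a, a + h) with both entries in A, i.e. a copy of the set A ∩ (A + h).
for h in sumset(A):
    fiber = {x for (x, y) in AxA if x ^ y == h}
    assert fiber == {a for a in A if a ^ h in A}
print("checks passed; K =", K)
```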
Unfortunately, this argument does not seem to be easily made rigorous using the traditional doubling constant; even just showing the significantly weaker statement that $A+A$ has doubling constant at most $K^2$ is unclear (and possibly even false). However, it turns out (as discussed in this recent paper of myself with Green and Manners) that things are much better behaved in the entropic setting. Here, the analogue of a subset $A$ of ${\bf F}_2^n$ is a random variable $X$ taking values in ${\bf F}_2^n$, and the analogue of the (logarithmic) doubling constant is the entropic doubling constant $d(X;X) := {\bf H}(X_1+X_2) - {\bf H}(X)$, where $X_1, X_2$ are independent copies of $X$. If $X$ is a random variable in some additive group $G$ and $\pi \colon G \to H$ is a homomorphism, one then has what we call the fibring inequality
$$d(X;X) \geq d(\pi(X);\pi(X)) + d(X|\pi(X);\, X|\pi(X)),$$
where the conditional doubling constant $d(X|\pi(X);X|\pi(X))$ is defined as
$$d(X|\pi(X);X|\pi(X)) := {\bf H}(X_1+X_2 \mid \pi(X_1), \pi(X_2)) - {\bf H}(X \mid \pi(X)).$$
From the chain rule for Shannon entropy one can in fact upgrade this inequality to the identity
$$d(X;X) = d(\pi(X);\pi(X)) + d(X|\pi(X);\, X|\pi(X)) + I(X_1+X_2 : \pi(X_1) \mid \pi(X_1)+\pi(X_2)),$$
the defect being a conditional mutual information, which is of course non-negative.
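The fibring inequality, and its upgrade to an exact identity with a conditional mutual information as defect, can be sanity-checked numerically on a small example. The following Python sketch (my own toy verification, not from the paper) takes $G = {\bf F}_2^2$, $\pi$ the projection onto the low bit (a homomorphism for XOR), and an arbitrary distribution $p$, computing all entropies exactly from the $16$-atom joint distribution of $(X_1, X_2)$:

```python
from collections import defaultdict
from math import log2

def H(dist):
    # Shannon entropy (in bits) of a distribution {outcome: probability}
    return -sum(q * log2(q) for q in dist.values() if q > 0)

def push(dist, f):
    # distribution of f(W) when W is distributed according to dist
    out = defaultdict(float)
    for w, q in dist.items():
        out[f(w)] += q
    return dict(out)

def cond_H(joint, f, g):
    # conditional entropy H(f(W) | g(W))
    return H(push(joint, lambda w: (f(w), g(w)))) - H(push(joint, g))

pi = lambda x: x & 1                    # homomorphism (F_2^2, XOR) -> (F_2, XOR)
p = {0: 0.4, 1: 0.1, 2: 0.3, 3: 0.2}   # an arbitrary distribution on F_2^2
joint = {(a, b): p[a] * p[b] for a in p for b in p}   # (X1, X2) independent

S  = lambda w: w[0] ^ w[1]              # X1 + X2
T  = lambda w: pi(w[0]) ^ pi(w[1])      # pi(X1) + pi(X2)
P2 = lambda w: (pi(w[0]), pi(w[1]))     # (pi(X1), pi(X2))

d_X    = H(push(joint, S)) - H(p)                        # d(X;X)
d_piX  = H(push(joint, T)) - H(push(p, pi))              # d(pi(X); pi(X))
d_cond = cond_H(joint, S, P2) - (H(p) - H(push(p, pi)))  # d(X|pi(X); X|pi(X))
defect = cond_H(joint, S, T) - cond_H(joint, S, P2)      # I(X1+X2:pi(X1)|T)

assert defect >= -1e-12                                  # fibring inequality
assert abs(d_X - (d_piX + d_cond + defect)) < 1e-9       # fibring identity
print("fibring identity verified")
```

(Here ${\bf H}(X \mid \pi(X))$ is computed as ${\bf H}(X) - {\bf H}(\pi(X))$, valid since $\pi(X)$ is a function of $X$.)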
Applying this identity with $X$ replaced by two independent copies $(X_1, X_2)$ of itself, and using the addition map $(x,y) \mapsto x+y$ for $\pi$, we obtain in particular that
$$2 d(X;X) \geq d(X_1+X_2;\, X_1+X_2) + d(X_1|X_1+X_2;\, X_1|X_1+X_2),$$
since once one fixes $X_1+X_2$, the pair $(X_1,X_2)$ is determined by $X_1$ (or by $X_2$). If $d(X;X) = \log K$, then at least one of $d(X_1+X_2; X_1+X_2)$ or $d(X_1|X_1+X_2; X_1|X_1+X_2)$ will be less than or equal to $\log K$. This is the entropy analogue of at least one of (i) or (ii) improving, or at least not degrading, the doubling constant, although there are some minor technicalities involving how one deals with the conditioning to $X_1+X_2$ in the second term that we will gloss over here (one can pigeonhole the instances of $X_1$ into the various events $X_1+X_2 = s$, and "depolarise" the induction hypothesis to deal with distances $d(X;Y)$ between pairs of random variables $X, Y$ that do not necessarily have the same distribution). We can even compute the defect in the above inequality: a careful inspection of the argument eventually reveals that
$$2 d(X;X) = d(X_1+X_2;\, X_1+X_2) + d(X_1|X_1+X_2;\, X_1|X_1+X_2) + I(X_1+X_2 : X_1+X_3 \mid X_1+X_2+X_3+X_4),$$
where we now take four independent copies $X_1, X_2, X_3, X_4$ of $X$. This leads (modulo some technicalities) to the following interesting conclusion: if neither (i) nor (ii) leads to an improvement in the entropic doubling constant, then $X_1+X_2$ and $X_1+X_3$ are conditionally independent relative to $X_1+X_2+X_3+X_4$. This situation (or an approximation to it) is what we refer to in the paper as the "endgame".
A version of this endgame conclusion is in fact valid in any characteristic. But in characteristic $2$, we can take advantage of the identity
$$(X_1+X_2) + (X_1+X_3) = X_2+X_3.$$
Conditioning on $X_1+X_2+X_3+X_4$, and using symmetry, we now conclude that if we are in the endgame exactly (so that the mutual information is zero), then the independent sum of two copies of $(X_1+X_2 \mid X_1+X_2+X_3+X_4)$ has exactly the same distribution as one copy (indeed that sum is $X_2+X_3$, which by symmetry has the same conditional distribution); in particular, the entropic doubling constant here is zero, which is certainly a reduction in the doubling constant.
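Both ingredients of this step are easy to verify directly: the cancellation $(X_1+X_2)+(X_1+X_3) = X_2+X_3$ is just XOR being its own inverse, and a variable uniform on a coset of a subspace has zero entropic doubling (the converse characterization also holds, though the sketch below, my own toy check, only tests one direction against a generic distribution):

```python
from math import log2
from itertools import product

# The pointwise identity: XOR is an involution, so the X1 contributions cancel.
for x1, x2, x3 in product(range(8), repeat=3):
    assert (x1 ^ x2) ^ (x1 ^ x3) == x2 ^ x3

def entropy(dist):
    return -sum(q * log2(q) for q in dist.values() if q > 0)

def entropic_doubling(dist):
    # d(X;X) = H(X1 + X2) - H(X), with X1, X2 independent copies of X
    s = {}
    for a, pa in dist.items():
        for b, pb in dist.items():
            s[a ^ b] = s.get(a ^ b, 0.0) + pa * pb
    return entropy(s) - entropy(dist)

H_sub = [0b000, 0b011, 0b101, 0b110]          # a 2-dimensional subspace of F_2^3
coset = {h ^ 0b001: 0.25 for h in H_sub}      # uniform distribution on a coset
assert abs(entropic_doubling(coset)) < 1e-12  # zero entropic doubling

# By contrast, a distribution not uniform on a coset has d(X;X) > 0.
generic = {x: (x + 1) / 36 for x in range(8)}
assert entropic_doubling(generic) > 1e-3
print("endgame identities verified")
```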
To deal with the situation where the conditional mutual information is small but not completely zero, we have to use an entropic version of the Balog–Szemerédi–Gowers lemma, but fortunately this was already worked out in an old paper of mine (although in order to optimise the final constant, we ended up using a slight variant of that lemma).
I am planning to formalize this paper in the proof assistant language Lean 4; more details to follow.