
Schnirelmann density


In additive number theory, the Schnirelmann density of a sequence of numbers is a way to measure how "dense" the sequence is. It is named after Russian mathematician Lev Schnirelmann, who was the first to study it.

Definition

The Schnirelmann density of a set of natural numbers A is defined as

\sigma A = \inf_{n} \frac{A(n)}{n},

where A(n) denotes the number of elements of A not exceeding n and inf denotes the infimum.

The Schnirelmann density is well-defined even if the limit of A(n)/n as n → ∞ fails to exist (see upper and lower asymptotic density).
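
As a concrete illustration (a sketch of my own, not part of the article), the counting function and the density can be approximated numerically for explicitly given sets. Note that truncating the infimum to n ≤ N only yields an upper bound on σA, so a small computed value suggests, but does not prove, that the density is 0. All names below are illustrative.

    # Finite-range sketch of the definition above (Python).  Truncating the
    # infimum to n <= N gives only an UPPER BOUND on sigma(A).

    def counting(A, n):
        """A(n): the number of elements of A lying in {1, ..., n}."""
        return sum(1 for a in A if 1 <= a <= n)

    def density_upper_bound(A, N=1000):
        """min over 1 <= n <= N of A(n)/n, an upper bound for sigma(A)."""
        return min(counting(A, n) / n for n in range(1, N + 1))

    naturals = set(range(1, 1001))
    squares = {k * k for k in range(1, 32)}      # the squares up to 961
    print(density_upper_bound(naturals))         # 1.0, and indeed sigma(N) = 1
    print(density_upper_bound(squares))          # about 0.03 here, but sigma is actually 0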

Properties

By definition, 0 ≤ A(n) ≤ n and n·σA ≤ A(n) for all n, and therefore 0 ≤ σA ≤ 1, and σA = 1 if and only if A = ℕ. Furthermore,

\sigma A = 0 \;\Rightarrow\; \forall \epsilon > 0\ \exists n\ A(n) < \epsilon n.

Sensitivity

The Schnirelmann density is sensitive to the first values of a set:

\forall k:\ k \notin A \;\Rightarrow\; \sigma A \leq 1 - 1/k.

In particular,

1 \notin A \;\Rightarrow\; \sigma A = 0

and

2 \notin A \;\Rightarrow\; \sigma A \leq \frac{1}{2}.

Consequently, the Schnirelmann densities of the even numbers and the odd numbers, which one might expect to agree, are 0 and 1/2 respectively. Schnirelmann and Yuri Linnik exploited this sensitivity.
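
Both of these values can be seen in a finite computation (again my own illustration): the infimum for the even numbers is already forced to 0 at n = 1, and the infimum 1/2 for the odd numbers is attained at n = 2.

    # The evens miss 1, so A(1)/1 = 0; for the odds the minimum 1/2 occurs at n = 2.

    def density_upper_bound(A, N=1000):
        count, best = 0, 1.0
        for n in range(1, N + 1):
            count += n in A
            best = min(best, count / n)
        return best

    evens = set(range(2, 1001, 2))
    odds = set(range(1, 1001, 2))
    print(density_upper_bound(evens))   # 0.0
    print(density_upper_bound(odds))    # 0.5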

Schnirelmann's theorems

If we set 𝔊² = {k² : k = 1, 2, 3, …}, then Lagrange's four-square theorem can be restated as σ(𝔊² ⊕ 𝔊² ⊕ 𝔊² ⊕ 𝔊²) = 1. (Here the symbol A ⊕ B denotes the sumset of A ∪ {0} and B ∪ {0}.) It is clear that σ𝔊² = 0. In fact, we still have σ(𝔊² ⊕ 𝔊²) = 0, and one might ask at what point the sumset attains Schnirelmann density 1, and how it increases along the way. It turns out that σ(𝔊² ⊕ 𝔊² ⊕ 𝔊²) = 5/6, and summing 𝔊² once more yields a more populous set still, namely all of ℕ. Schnirelmann went on to develop these ideas into the following theorems, aimed at additive number theory, and showed them to be a novel (if not greatly powerful) tool for attacking important problems such as Waring's problem and Goldbach's conjecture.
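
The densities quoted in this paragraph can be explored numerically over a finite range (a sketch of my own; the limit and helper names are arbitrary). Since the truncated minimum only approaches the true infimum from above, the three-fold sumset comes out slightly above 5/6 rather than exactly 5/6, and the two-fold value keeps drifting toward 0 as the range grows.

    # Sums of at most m squares, using the article's ⊕ convention of adjoining 0.

    LIMIT = 20000
    squares = {k * k for k in range(1, int(LIMIT ** 0.5) + 1)}

    def oplus(A, B, limit=LIMIT):
        """Sumset of (A ∪ {0}) and (B ∪ {0}), truncated to 1..limit."""
        return {a + b for a in A | {0} for b in B | {0} if 0 < a + b <= limit}

    def density_upper_bound(S, N=LIMIT):
        count, best = 0, 1.0
        for n in range(1, N + 1):
            count += n in S
            best = min(best, count / n)
        return best

    S = squares
    for m in (2, 3, 4):
        S = oplus(S, squares)
        print(m, density_upper_bound(S))
    # m = 2: well below 1 and still shrinking as LIMIT grows (true density 0)
    # m = 3: slightly above 5/6 ≈ 0.833, which it approaches from above
    # m = 4: exactly 1.0, i.e. Lagrange's four-square theorem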

Theorem. Let A and B be subsets of ℕ. Then

\sigma(A \oplus B) \geq \sigma A + \sigma B - \sigma A \cdot \sigma B.

Note that σA + σB − σA·σB = 1 − (1 − σA)(1 − σB). Inductively, we have the following generalization.

Corollary. Let A_i ⊆ ℕ be a finite family of subsets of ℕ. Then

\sigma\left(\bigoplus_{i} A_{i}\right) \geq 1 - \prod_{i}\left(1 - \sigma A_{i}\right).
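
As a sanity check of the inequality (an example of my own, not from the article), take A = B = {n ∈ ℕ : n ≡ 1 (mod 3)}. Both sets have Schnirelmann density 1/3, the sumset A ⊕ B consists of every n not divisible by 3 and has density 2/3, and indeed 2/3 ≥ 1 − (1 − 1/3)² = 5/9. Both infima here are attained at n = 3, so a finite computation gives them exactly.

    # Check sigma(A ⊕ B) >= 1 - (1 - sigma A)(1 - sigma B) for A = B = {n ≡ 1 mod 3}.

    LIMIT = 3000

    def oplus(X, Y, limit=LIMIT):
        return {x + y for x in X | {0} for y in Y | {0} if 0 < x + y <= limit}

    def density(S, N=LIMIT):
        count, best = 0, 1.0
        for n in range(1, N + 1):
            count += n in S
            best = min(best, count / n)
        return best

    A = {n for n in range(1, LIMIT + 1) if n % 3 == 1}
    sA = density(A)                               # 1/3, attained at n = 3
    sAB = density(oplus(A, A))                    # 2/3: everything not divisible by 3
    print(sA, sAB, sAB >= 1 - (1 - sA) ** 2)      # 0.333..., 0.666..., True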

The theorem provides the first insight into how sumsets accumulate. Unfortunately, its conclusion stops short of showing that σ is superadditive. Schnirelmann, however, provided us with the following results, which sufficed for most of his purposes.

Theorem. Let A and B be subsets of ℕ. If σA + σB ≥ 1, then

A \oplus B = \mathbb{N}.

Theorem. (Schnirelmann) Let A ⊆ ℕ. If σA > 0, then there exists k such that

\bigoplus_{i=1}^{k} A = \mathbb{N}.
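
For instance (my own illustration), A = {1} ∪ {even numbers} has σA = 1/2, so the first of these two theorems with B = A already gives A ⊕ A = ℕ; equivalently, A is an additive basis of degree 2, as the second theorem predicts for sets of positive density.

    # A = {1} ∪ {evens}: sigma(A) = 1/2 (approached, not attained, so the finite
    # minimum sits just above 0.5), and sigma(A) + sigma(A) >= 1 forces A ⊕ A = N.

    LIMIT = 2000
    A = {1} | set(range(2, LIMIT + 1, 2))

    def oplus(X, Y, limit=LIMIT):
        return {x + y for x in X | {0} for y in Y | {0} if 0 < x + y <= limit}

    def density_upper_bound(S, N=LIMIT):
        count, best = 0, 1.0
        for n in range(1, N + 1):
            count += n in S
            best = min(best, count / n)
        return best

    print(density_upper_bound(A))                     # just above 0.5; the infimum is 1/2
    print(oplus(A, A) == set(range(1, LIMIT + 1)))    # True: A ⊕ A covers all of 1..LIMIT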

Additive bases

A subset A ⊆ ℕ with the property that A ⊕ A ⊕ ⋯ ⊕ A = ℕ for a finite sum is called an additive basis, and the least number of summands required is called the degree (sometimes order) of the basis. Thus, the last theorem states that any set with positive Schnirelmann density is an additive basis. In this terminology, the set of squares 𝔊² = {k² : k = 1, 2, 3, …} is an additive basis of degree 4. (For an open problem concerning additive bases, see the Erdős–Turán conjecture on additive bases.)

Mann's theorem

Historically the theorems above were pointers to the following result, at one time known as the α + β hypothesis. It was used by Edmund Landau and was finally proved by Henry Mann in 1942.

Theorem. (Mann 1942) Let A and B be subsets of ℕ. In the case that A ⊕ B ≠ ℕ, we still have

\sigma(A \oplus B) \geq \sigma A + \sigma B.

An analogue of this theorem for lower asymptotic density was obtained by Kneser. At a later date, E. Artin and P. Scherk simplified the proof of Mann's theorem.
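
The residue-class example used for Schnirelmann's inequality also shows that Mann's bound can be attained with equality (again an illustration of my own): with A = B = {n ≡ 1 (mod 5)} we have σA = σB = 1/5, A ⊕ B misses every multiple of 5 (so A ⊕ B ≠ ℕ), and σ(A ⊕ B) = 2/5 = σA + σB, whereas Schnirelmann's inequality only guarantees 9/25.

    # Mann's bound with equality: A = B = {n ≡ 1 mod 5}.

    LIMIT = 5000
    A = {n for n in range(1, LIMIT + 1) if n % 5 == 1}

    def oplus(X, Y, limit=LIMIT):
        return {x + y for x in X | {0} for y in Y | {0} if 0 < x + y <= limit}

    def density(S, N=LIMIT):
        count, best = 0, 1.0
        for n in range(1, N + 1):
            count += n in S
            best = min(best, count / n)
        return best

    sA = density(A)                      # 1/5, attained at n = 5
    sAB = density(oplus(A, A))           # 2/5: residues 1 and 2 mod 5
    print(sAB >= sA + sA)                # True, with equality
    print(sAB >= 1 - (1 - sA) ** 2)      # Schnirelmann's bound 9/25 is weaker here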

Waring's problem

Main article: Waring's problem

Let k and N be natural numbers. Let 𝔊^k = {i^k : i = 1, 2, 3, …}. Define r_N^k(n) to be the number of non-negative integral solutions to the equation

x_{1}^{k} + x_{2}^{k} + \cdots + x_{N}^{k} = n

and R_N^k(n) to be the number of non-negative integral solutions to the inequality

0 \leq x_{1}^{k} + x_{2}^{k} + \cdots + x_{N}^{k} \leq n,

in the variables x_i, respectively. Thus R_N^k(n) = Σ_{i=0}^{n} r_N^k(i). We have the following two facts, both of which are checked numerically in the sketch after this list:

  • r_{N}^{k}(n) > 0 \leftrightarrow n \in N\mathfrak{G}^{k} (that is, n is a sum of N non-negative k-th powers),
  • R_{N}^{k}(n) \geq \left(\frac{n}{N}\right)^{N/k}.
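
Both facts are easy to check numerically for small parameters (a sketch of my own; the choices k = 2, N = 4 and the range are purely illustrative): r_N^k is tabulated by dynamic programming over the number of variables, and the running sum R_N^k(n) is compared against (n/N)^{N/k}.

    # Tabulate r_N^k(n), the number of non-negative integer solutions of
    # x_1^k + ... + x_N^k = n, and check the lower bound on R_N^k(n).

    def r_table(k, N, n_max):
        powers = [x ** k for x in range(int(n_max ** (1.0 / k)) + 2) if x ** k <= n_max]
        r = [1] + [0] * n_max                 # zero variables: only the empty sum 0
        for _ in range(N):                    # add one variable at a time
            new = [0] * (n_max + 1)
            for m in range(n_max + 1):
                if r[m]:
                    for p in powers:
                        if m + p <= n_max:
                            new[m + p] += r[m]
            r = new
        return r

    k, N, n_max = 2, 4, 50
    r = r_table(k, N, n_max)
    R = 0
    for n in range(n_max + 1):
        R += r[n]                             # R_N^k(n) = sum of r_N^k(i) for i <= n
        assert R >= (n / N) ** (N / k)        # the stated lower bound
    print(r[:11])                             # all positive: every n is a sum of 4 squares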

The volume of the N-dimensional body defined by 0 ≤ x_1^k + x_2^k + ⋯ + x_N^k ≤ n is bounded by the volume of the hypercube of side n^{1/k}, hence R_N^k(n) = Σ_{i=0}^{n} r_N^k(i) ≤ n^{N/k}. The hard part is to show that this bound still works on average, i.e.,

Lemma. (Linnik) For all k ∈ ℕ there exists N ∈ ℕ and a constant c = c(k), depending only on k, such that for all n ∈ ℕ,

r_{N}^{k}(m) < c\, n^{\frac{N}{k} - 1}

for all 0 m n . {\displaystyle 0\leq m\leq n.}

With this at hand, the following theorem can be elegantly proved.

Theorem. For all k there exists N for which σ(N𝔊^k) > 0.

Combined with Schnirelmann's theorem that any set of positive Schnirelmann density is an additive basis, this establishes the general solution to Waring's problem:

Corollary. (Hilbert 1909) For all k there exists N, depending only on k, such that every positive integer n can be expressed as the sum of at most N k-th powers.

Schnirelmann's constant

In 1930 Schnirelmann used these ideas in conjunction with the Brun sieve to prove Schnirelmann's theorem, that any natural number greater than 1 can be written as the sum of not more than C prime numbers, where C is an effectively computable constant: Schnirelmann obtained C < 800000. Schnirelmann's constant is the lowest number C with this property.

Olivier Ramaré showed in (Ramaré 1995) that Schnirelmann's constant is at most 7, improving the earlier upper bound of 19 obtained by Hans Riesel and R. C. Vaughan.

Schnirelmann's constant is at least 3; Goldbach's conjecture implies that this is the constant's actual value.

In 2013, Harald Helfgott proved Goldbach's weak conjecture: every odd number greater than 5 is the sum of three primes. Since every even number greater than 8 is 3 plus such an odd number, and the remaining small cases are easily checked by hand, Schnirelmann's constant is at most 4.
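
The bound is easy to confirm for small numbers by direct computation (a sketch of my own; of course a finite check says nothing about the constant itself, and in this range three primes always suffice, consistent with the conjectured value 3).

    # Check that every integer from 2 to LIMIT is a sum of at most 4 primes.

    LIMIT = 100000
    sieve = bytearray([1]) * (LIMIT + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(LIMIT ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, LIMIT + 1, p)))
    primes = [i for i in range(2, LIMIT + 1) if sieve[i]]

    def primes_needed(n):
        """A small (not necessarily minimal) number of primes summing to n."""
        if sieve[n]:
            return 1
        if n % 2 == 1 and sieve[n - 2]:
            return 2                          # n = 2 + (n - 2)
        m = n if n % 2 == 0 else n - 3        # reduce an odd n to an even number
        for p in primes:                      # Goldbach-style split of the even part
            if p > m:
                break
            if sieve[m - p]:
                return 2 if m == n else 3
        return 99                             # no decomposition found (never happens here)

    print(max(primes_needed(n) for n in range(2, LIMIT + 1)) <= 4)   # True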

Essential components

Khintchin proved that the sequence of squares, though of zero Schnirelmann density, when added to a sequence of Schnirelmann density between 0 and 1, increases the density:

\sigma(A + \mathfrak{G}^{2}) > \sigma(A) \quad \text{for } 0 < \sigma(A) < 1.

This was soon simplified and extended by Erdős, who showed that if A is any sequence with Schnirelmann density α and B is an additive basis of order k, then

\sigma(A + B) \geq \alpha + \frac{\alpha(1 - \alpha)}{2k},

and this was improved by Plünnecke to

\sigma(A + B) \geq \alpha^{1 - \frac{1}{k}}.
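
For one explicit set with density strictly between 0 and 1 (an illustration of my own), these bounds can be compared with the actual density gain. Take A = {n : n ≡ 1 (mod 4)}, so α = 1/4, and add the squares, an additive basis of order k = 4; the sketch below uses the article's ⊕ convention of adjoining 0, and the finite-range minimum is again only an approximation.

    # Compare sigma(A ⊕ squares) with the Erdős and Plünnecke lower bounds.

    LIMIT = 20000
    A = {n for n in range(1, LIMIT + 1) if n % 4 == 1}
    squares = {i * i for i in range(1, int(LIMIT ** 0.5) + 1)}

    def oplus(X, Y, limit=LIMIT):
        return {x + y for x in X | {0} for y in Y | {0} if 0 < x + y <= limit}

    def density_upper_bound(S, N=LIMIT):
        count, best = 0, 1.0
        for n in range(1, N + 1):
            count += n in S
            best = min(best, count / n)
        return best

    alpha, k = 0.25, 4
    print(density_upper_bound(oplus(A, squares)))   # a bit above 1/2, so the density grew
    print(alpha + alpha * (1 - alpha) / (2 * k))    # Erdős bound, about 0.273
    print(alpha ** (1 - 1 / k))                     # Plünnecke bound, about 0.354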

Sequences with this property (increasing any density less than one by addition) were named essential components by Khintchin. Linnik showed that an essential component need not be an additive basis, as he constructed an essential component that has x^{o(1)} elements less than x. More precisely, the sequence has

e^{(\log x)^{c}}

elements less than x for some c < 1. This was improved by E. Wirsing to

e^{\sqrt{\log x}\,\log\log x}.

For a while, it remained an open problem how many elements an essential component must have. Finally, Ruzsa determined that for every ε > 0 there is an essential component which has at most c(log x)^{1+ε} elements up to x, but there is no essential component which has c(log x)^{1+o(1)} elements up to x.

References

  1. Schnirelmann, L. G. (1930). "On the additive properties of numbers", first published in Proceedings of the Don Polytechnic Institute in Novocherkassk (in Russian), vol. XIV (1930), pp. 3–27; reprinted in Uspekhi Matematicheskikh Nauk (in Russian), 1939, no. 6, pp. 9–25.
  2. Schnirelmann, L. G. (1933). First published as "Über additive Eigenschaften von Zahlen" in Mathematische Annalen (in German), vol. 107 (1933), pp. 649–690; reprinted as "On the additive properties of numbers" in Uspekhi Matematicheskikh Nauk (in Russian), 1940, no. 7, pp. 7–46.
  3. Nathanson (1996), pp. 191–192.
  4. Nathanson (1990), p. 397.
  5. E. Artin and P. Scherk (1943). "On the sums of two sets of integers". Annals of Mathematics 44, pp. 138–142.
  6. Nathanson (1996), p. 208.
  7. Gelfond & Linnik (1966), p. 136.
  8. Helfgott, Harald A. (2013). "Major arcs for Goldbach's theorem". arXiv:1305.2897.
  9. Helfgott, Harald A. (2012). "Minor arcs for Goldbach's problem". arXiv:1205.5252.
  10. Helfgott, Harald A. (2013). "The ternary Goldbach conjecture is true". arXiv:1312.7748.
  11. Helfgott, Harald A. (2015). "The ternary Goldbach problem". arXiv:1501.05438.
  12. Ruzsa (2009), p. 177.
  13. Ruzsa (2009), p. 179.
  14. Linnik, Yu. V. (1942). "On Erdős's theorem on the addition of numerical sequences". Mat. Sb. 10: 67–78. Zbl 0063.03574.
  15. Ruzsa, Imre Z. (1987). "Essential Components". Proceedings of the London Mathematical Society, s3-54 (1), pp. 38–56. https://doi.org/10.1112/plms/s3-54.1.38
  16. Ruzsa (2009), p. 184.