Posts Tagged ‘math.CA’

The extremal utility belt: Cauchy-Schwarz

March 10, 2010

I found this little gem in an old post on Terry Tao’s blog. There’s not really enough content in it to merit an entire Extremal Toolbox post, but it’s too cool not to point out.

Theorem. Graphs of order n and girth at least 5 have o(n^2) edges.

Proof. Suppose that G has \frac{1}{2} c n^2 edges. Define the function A: V^2 \rightarrow \{0, 1\} to be the “adjacency characteristic function.” Now, applying Cauchy-Schwarz twice (all sums below range over vertices of G):

n^4 \sum_{x_1, x_2, y_1, y_2} A(x_1, y_1)A(x_1, y_2)A(x_2, y_1)A(x_2, y_2)

\geq \left( n \sum_{x, y_1, y_2} A(x, y_1) A(x, y_2) \right)^2

\geq \left( \sum_{x, y} A(x, y) \right)^4 \geq \frac{1}{16} c^4 n^8.

The first expression is n^4 times the number of (possibly degenerate) 4-cycles in the graph, and it’s easy to check that there are only O(n^3) degenerate 4-cycles. So if G contained no genuine 4-cycle, the first expression would be O(n^7), while for c fixed the last expression \frac{1}{16} c^4 n^8 grows like n^8; for n large enough this is a contradiction, so G must contain a 4-cycle, violating the girth assumption. It follows that c \rightarrow 0 as n \rightarrow \infty, i.e., G has o(n^2) edges. QED
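If you want to see the chain in action, here’s a quick numerical sanity check (a little sketch of my own, not something from Tao’s post; the random-graph model and parameters are arbitrary). The quadruple sum on the left is just the number of closed 4-walks in the graph, i.e. \mathrm{tr}(A^4) for the adjacency matrix A, so we can compare the two ends of the chain directly.

```python
import numpy as np

# Numerical sanity check of the Cauchy-Schwarz chain on a random graph.
rng = np.random.default_rng(0)
n, p = 60, 0.3

# Symmetric 0/1 adjacency matrix with no self-loops.
A = (rng.random((n, n)) < p).astype(int)
A = np.triu(A, 1)
A = A + A.T

# sum over x1, x2, y1, y2 of A(x1,y1) A(x1,y2) A(x2,y1) A(x2,y2)
# equals the number of closed 4-walks, i.e. trace(A^4).
walks4 = np.trace(np.linalg.matrix_power(A, 4))

# sum_{x,y} A(x,y): every edge is counted twice, once per orientation.
edge_sum = A.sum()

lhs = n**4 * walks4   # first expression in the chain
rhs = edge_sum**4     # (sum_{x,y} A(x,y))^4, the last expression

print("n^4 * #(closed 4-walks) =", lhs)
print("(sum A)^4               =", rhs)
print("Cauchy-Schwarz chain holds:", lhs >= rhs)
```

The inequality holds for any graph, of course; the content of the proof is that when the graph has no 4-cycles the left side is only O(n^7), which is what forces c to be small.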

Now, it’s possible to get better bounds by cleverly doing “surgery” on the graph and just using pigeonhole. (See here for details.) But it’s tricky and far less beautiful than the argument with Cauchy-Schwarz, which really demonstrates one way in which C-S can be thought of as “strengthening” pigeonhole.


Quasi-bleg: Why are there bump functions?

September 4, 2009

When I was learning analysis (beyond, say, first-year calculus), one of the facts that most surprised me was that there are nonzero functions that are smooth (i.e., infinitely differentiable) and yet compactly supported. Of course, I didn’t think about it with that phrasing; there’s a pretty simple geometric interpretation of smoothness for most functions one encounters in calculus (actually, one rarely sees differentiable functions that aren’t smooth!). Specifically, if a function isn’t smooth at x, then there’s some sort of “kink” at that point, or at least “around” that point.

Is this justified? Well, not totally, but let’s give a couple of examples to at least show why it’s a good first approximation. (more…)
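As a concrete example (my addition; it isn’t written down in the excerpt above), the textbook bump function is f(x) = \exp(-1/(1 - x^2)) on (-1, 1), extended by zero. A quick finite-difference sketch suggests the function and its first derivative both flatten out to 0 at the edge of the support, with no kink in sight:

```python
import math

# The classic bump function (illustrative example, not from the post):
#   f(x) = exp(-1/(1 - x^2))  for |x| < 1,   f(x) = 0  otherwise.
def bump(x):
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1 else 0.0

# Crude finite-difference look at f and f' as x approaches the edge of the
# support: both shrink to (essentially) zero, so there is no visible kink.
h = 1e-3
for x0 in (0.0, 0.9, 0.99, 0.999):
    d1 = (bump(x0 + h) - bump(x0 - h)) / (2 * h)
    print(f"x = {x0:5.3f}   f(x) = {bump(x0):.3e}   f'(x) ~ {d1:.3e}")
```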

GILA 3: The Fourier transform, and why it works

September 3, 2009

At the end of the last post, we outlined an approach to proving Roth’s theorem via a density-increment method. I’ll re-post it here, a little more fleshed out, for convenience.

Sketch. The idea is to deal with “structured sets” (like sets that are themselves long arithmetic progressions) and “pseudo-random sets” in different ways; we saw last time that the density-increment argument looks promising for structured sets, but “random” sets won’t succumb to it.

First, we need to make rigorous our intuition that “random enough” sets of positive density will have length-3 APs. The obvious way to do this is by showing the contrapositive. We want to find some suitable “randomness measure” such that random dense sets have high randomness measure. Now let S be a positive-density set without 3-APs; we want to show that this condition is enough to guarantee that S has low randomness measure.

Now for our modified-probabilistic argument. We want to show that “low-randomness” sets can’t be uniformly distributed (mod d) for every d; this guarantees the existence of an AP on which the set has higher density. We pass to this AP; rinse, lather, and repeat, until our density rises above some pre-set bound (\delta > 2/3 will do nicely). We also want to introduce a tool that tells us how much a set “looks like” an arithmetic progression with common difference d; this is because we need quantitative bounds for the density-increment argument, to ensure that the densities of our set on the successive APs actually climb past the threshold, rather than just creeping up toward some limit (say, 0.0000001) and stalling there.
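As an aside, here’s why \delta > 2/3 is a safe stopping point (the pigeonhole argument below is my own filler, though it’s the standard one): partition \{0, \ldots, N-1\} into consecutive triples; a set of density greater than 2/3 can’t miss an element of every triple, so it contains some triple \{3k, 3k+1, 3k+2\} outright, and that is already a 3-AP. A quick brute-force check:

```python
import random

# Any subset of {0, ..., N-1} with density > 2/3 contains a 3-term AP:
# some consecutive triple {3k, 3k+1, 3k+2} must lie entirely in the set.
def find_consecutive_triple(s, n):
    """Return a triple (3k, 3k+1, 3k+2) contained entirely in s, if any."""
    for k in range(n // 3):
        triple = (3 * k, 3 * k + 1, 3 * k + 2)
        if all(t in s for t in triple):
            return triple
    return None

random.seed(1)
N = 300
for _ in range(5):
    # A random set with just over 2N/3 elements; pigeonhole guarantees that
    # some consecutive triple survives, and that triple is a 3-AP.
    S = set(random.sample(range(N), 2 * N // 3 + 1))
    print(find_consecutive_triple(S, N))
```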

So we’re looking for two technical tools: one that quantifies the “randomness” of a set, and one that lets us correlate our set with a candidate arithmetic progression to pass to, so that the density-increment argument actually works quantitatively. As it turns out, though, one tool will do both jobs for us: the discrete Fourier transform.
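To give a tiny preview of how one tool can do both jobs (this is a numerical sketch of my own, with invented parameters, not the post’s actual development): over \mathbb{Z}_N, compute the Fourier coefficients \hat{1_S}(\xi) = \frac{1}{N}\sum_x 1_S(x) e^{-2\pi i x \xi / N} of a set’s indicator function, and look at the largest nontrivial one.

```python
import numpy as np

# The discrete Fourier transform as a "randomness measure" (illustrative sketch).
rng = np.random.default_rng(0)
N = 1024

def largest_nontrivial_coeff(indicator):
    # hat(1_S)(xi) = (1/N) * sum_x 1_S(x) exp(-2*pi*i*x*xi/N); np.fft.fft
    # uses exactly this exponential convention, so we just normalize by N.
    coeffs = np.fft.fft(indicator) / N
    return np.abs(coeffs[1:]).max()   # skip xi = 0, which only sees the density

# A very structured set of density about 1/3: every third residue mod N.
structured = np.zeros(N)
structured[::3] = 1

# A random set of the same density.
random_set = (rng.random(N) < 1 / 3).astype(float)

print("structured set:", largest_nontrivial_coeff(structured))
print("random set:    ", largest_nontrivial_coeff(random_set))
```

The structured set has a large nontrivial coefficient (at frequencies near N/3 and 2N/3), while the random set’s coefficients are all small: the size of the largest nontrivial coefficient measures how “non-random” the set is, and its location points at the progression the set correlates with, which is exactly the two jobs described above.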

(more…)