Archive for the ‘expository’ Category

I’m in love. What’s that song?

March 29, 2010

I’ve been on spring break for the past week, so this is even older news than it would have been, but Alex Chilton died earlier this month. If you don’t know who Alex Chilton is, he started off as the singer of the Box Tops, who scored a late-’60s hit with “The Letter.” But in certain circles he’s better known as the lead singer and songwriter for the defining power pop band, Big Star. Big Star’s music was a formative influence on me as a person, if not as a mathematician, so I’m breaking from my usual routine of crackpot mathematics and expository articles to pay tribute.

Most of what I could say has already been said better, so I’ll link to some other tributes shortly. People have talked about how Big Star were a major influence on all sorts of bands in the ’80s and ’90s up to the present, from R.E.M. to the Replacements to Elliott Smith. People have talked about the timeless nature of Big Star’s songs, about how “Thirteen” is maybe the ultimate expression of what it’s like to be that age. But one thing that’s perhaps been lost in the chatter is the fact that so much of Chilton’s work with Big Star is just plain wonderful music. Witness “Nighttime” and “Blue Moon,” back-to-back tracks from Big Star’s final album, Third (also known as Sister Lovers) — to my ears, two of the most achingly beautiful pop songs ever recorded.

The best tribute I’ve read — maybe the definitive one — is Paul Westerberg’s in the New York Times. (Westerberg wrote and performed, with the Replacements, the song that gave this post its title.) But the best possible tribute, to my mind, would be if this blog post turned one person on to the music of the late, great Alex Chilton. It might change your life. More likely it won’t. But either way, it’s great damned music, and I hope — and expect — that it’ll live on long after I’m gone.

The extremal utility belt: Cauchy-Schwarz

March 10, 2010

I found this little gem in an old post on Terry Tao’s blog. There’s not really enough content in it to merit an entire Extremal Toolbox post, but it’s too cool not to point out.

Theorem. Graphs of order n and girth at least 5 have o(n^2) edges.

Proof. Suppose that G has \frac{1}{2} c n^2 edges, for some fixed c > 0. Define the function A: V^2 \rightarrow \{0, 1\} to be the “adjacency characteristic function,” so that A(x, y) = 1 exactly when xy is an edge. Now, applying Cauchy-Schwarz twice (all sums range over V):

n^4 \sum_{x_1, x_2, y_1, y_2} A(x_1, y_1)A(x_1, y_2)A(x_2, y_1)A(x_2, y_2) \geq \left( n \sum_{x, y_1, y_2} A(x, y_1) A(x, y_2) \right)^2 \geq \left( \sum_{x, y} A(x, y) \right)^4 = c^4 n^8,

where the last equality holds because each of the \frac{1}{2} c n^2 edges contributes two ordered pairs to \sum_{x, y} A(x, y), which is therefore c n^2.

Divide through by n^4: the quadruple sum counts the (possibly degenerate) 4-cycles in the graph, so G contains at least c^4 n^4 of them. It’s easy to check that only O(n^3) of these are degenerate, so for c fixed and n large enough, our graph must contain a genuine 4-cycle, contradicting the assumption that the girth is at least 5. Hence c must tend to 0 as n \rightarrow \infty, i.e., G has o(n^2) edges. QED
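To see the counting in action, here’s a quick numerical sanity check (my own sketch, not from Tao’s post; the graph parameters are arbitrary choices). It uses the fact that, for a symmetric 0/1 matrix, the quadruple sum above is exactly the trace of A^4:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random symmetric 0/1 adjacency matrix on n vertices, no loops.
n, p = 200, 0.3
A = np.triu((rng.random((n, n)) < p).astype(np.int64), 1)
A = A + A.T

c = A.sum() / n**2   # G has (1/2) c n^2 edges, so sum of A(x, y) is c n^2

# The sum of A(x1,y1)A(x1,y2)A(x2,y1)A(x2,y2) over all ordered
# 4-tuples equals tr(A^4) when A is symmetric.
closed_walks = np.trace(np.linalg.matrix_power(A, 4))

# Cauchy-Schwarz (applied twice) says n^4 * closed_walks >= (c n^2)^4,
# i.e. closed_walks >= c^4 n^4.
print(closed_walks, c**4 * n**4)
assert closed_walks >= c**4 * n**4
```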

Now, it’s possible to get better bounds by cleverly doing “surgery” on the graph and just using pigeonhole. (See here for details.) But it’s tricky and far less beautiful than the argument with Cauchy-Schwarz, which really demonstrates one way in which C-S can be thought of as “strengthening” pigeonhole.

The extremal toolbox: A matrix problem

January 10, 2010

I’m starting a new series of posts this semester where I get “back to basics.” One of the few areas of mathematics in which I can claim anything even in the same connected component as “expertise” is extremal combinatorics. Unfortunately for me and my lazy, big-picture brain, though, extremal combinatorics is very much a “problem-solving” subject, with a relatively small number of tools that are used to solve all sorts of different problems. So without some practice solving these problems, or expositing the solutions, it’s easy to get rusty.

Hence, “The Extremal Toolbox.” In each post, I’ll take a (solved!) problem in extremal combinatorics — anything from Sperner’s theorem to Kakeya over finite fields, as long as there’s an extremal flavor — and try to break down a proof into its component parts.

Today I’m going to examine a problem that appeared on MathOverflow some time ago, one which I didn’t quite solve (but came within \epsilon of!). The relevant post is here; if you don’t care to click through, here’s the problem.

Let M be an n \times n matrix with non-negative integer entries. Suppose further that whenever m_{ij} is 0, the entries in the ith row and the jth column together sum to at least n. Then the sum of all the entries in M is at least n^2/2.
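As a sanity check before the cut (my own sketch, not from the original post; it reads the condition as “row sum plus column sum is at least n”), here’s a brute-force search for counterexamples among small random matrices:

```python
import numpy as np

rng = np.random.default_rng(42)

def satisfies_hypothesis(M):
    """Every zero entry has row sum + column sum >= n."""
    n = M.shape[0]
    rows, cols = M.sum(axis=1), M.sum(axis=0)
    return all(rows[i] + cols[j] >= n for i, j in zip(*np.where(M == 0)))

n = 5
for _ in range(10_000):
    M = rng.integers(0, 3, size=(n, n))  # small random non-negative matrices
    if satisfies_hypothesis(M):
        assert M.sum() >= n * n / 2, M   # the claimed lower bound
print("no counterexample found")
```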

(more…)

Bleg: What’s the most recent day no one alive was born?

January 5, 2010

Inspired by Michael Lugo’s post on reconstructing a person from their DOB, zipcode, and gender.

If you, for whatever reason, ever watch the Today show, you’ll notice that one of the recurring features is the hosts listing the names of some men and women who are turning 100. Becoming a centenarian is a reasonably big accomplishment — in the U.S., it nets you a congratulatory letter from the President, for example. But if you look into it, you’ll find that you can find someone turning 100 on pretty much any given day. Usually not someone particularly well-known, but certainly someone. (I tried to find someone famous and vaguely math-related who just turned or is turning 100 for this post, but couldn’t; however, the fascinating economist Ronald Coase turned 99 last week.) It’s almost certainly true that on any given day, someone somewhere in the world is in fact celebrating their 100th birthday. But go ten years further, and you find almost no one who lives to 110. Actually, I know of only one supercentenarian, living or not, who is interesting for reasons apart from his longevity — the late Leopold Vietoris, the topologist, probably best known as half of the Vietoris-Rips complex and the Mayer-Vietoris sequence. Odds are pretty good that no one alive is turning 110 today, or tomorrow, or (sadly) New Year’s Day.

So… a question is starting to take shape. On every day between December 29, 1909, and today, someone was born who is still living today. But go much earlier than that, and the statement begins to fail. So what’s the most recent day that no one living was born on?

(more…)

What’s a “locally determined graph property?”

January 1, 2010

This has nothing to do with the rest of the post, but I’ll put it here so you read it before you get bored. I’d like to thank my readers (all seven of you) for supporting this blog in the first six months or so of its existence, and hope that you’ll stick around (and be joined by hundreds of new readers…) to hear my sporadic ramblings and wild ravings in the next year. Here’s to a happy and successful 2010!

Over at MathOverflow, Gjergji Zaimi asks (in a criminally under-voted-for question): How can we obtain global information from local data in graph theory? This is something that perhaps everyone working in or around graph theory has asked themselves, in some form, at some point — I know I have. So it’s not surprising that Gjergji’s question has received many different answers, each with something interesting to say.

I originally wanted to write a post trying to “answer” Gjergji’s question as best I could, but quickly realized the futility of that goal — it’s such a broad and deep question that I doubt anyone could answer it concisely, and I know I couldn’t! So instead I’ll just talk about an \epsilon of the question: what does “local data” even mean?

(more…)

The coupon collector’s problem

December 28, 2009

This is a considerably lower-level post than usual, which I’ll (following Terry Tao) also blame on the holidays; there’s another, even less mathematical post in the works which I hope to finish sometime tomorrow.

How many times do you need to flip a coin before you expect to see both heads and tails? How many times do you need to roll a die before you expect to see all the numbers 1-6? These are two instances of the coupon collector’s problem. Wikipedia gives not one, but two nice solutions to the problem, but there’s an even nicer “back-of-the-envelope” calculation which gives you the correct asymptotics for virtually nothing, and (I like to think) shows the power of thinking “categorically” at even a very low level.

So let’s give a statement of the problem. A company — say Coca-Cola, for concreteness — is holding a contest where everyone who collects one each of n different “coupons” wins some prize. You get a coupon with each purchase of a Coke, and each coupon is equally likely. What’s the expected number of Cokes you have to buy in order to collect all the coupons?

If you do some experimentation (or calculation) with small instances, you’ll see that this number seems to be growing somewhat faster than n. For n = 2, for example, the expected number is 3, and for n = 3 it’s 5.5. But how much faster? Like n^2? Like n \sqrt{n}? Or just a constant times n?
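If you’d like to verify those small cases, here’s a quick sketch of mine comparing the standard exact formula E = n(1 + \frac{1}{2} + \cdots + \frac{1}{n}) with a simulation:

```python
import random
from fractions import Fraction

def expected_exact(n):
    # Standard result: E = n * (1 + 1/2 + ... + 1/n), the sum of the
    # waiting times n/(n - k) for the (k+1)st new coupon.
    return n * sum(Fraction(1, k) for k in range(1, n + 1))

def simulate(n, trials=100_000):
    total = 0
    for _ in range(trials):
        seen, draws = set(), 0
        while len(seen) < n:
            seen.add(random.randrange(n))
            draws += 1
        total += draws
    return total / trials

for n in (2, 3, 6):
    print(n, float(expected_exact(n)), simulate(n))
# n = 2 gives 3, n = 3 gives 5.5, and a die (n = 6) gives 14.7.
```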

None of the above, as it happens, and you might have already guessed (or known) that the correct order of growth is n \log n. Here’s how you can figure this out for yourself.

Think of the collection as a function from Coke bottles to (equivalence classes of) coupons. If we’ve collected all the coupons, the function is surjective. So we can rephrase “What is the probability that, after I buy m Coke bottles, I have collected all n coupons” as “What is the probability that a random function from a set with m elements to a set with n elements is surjective?” Actually, we’ll estimate the probability that it’s not surjective.

If the function isn’t surjective, then its image is contained in some subset of the codomain with n-1 elements. Fix such a subset; the probability that a given element of the domain is mapped into it is \frac{n-1}{n}. Since the m elements of the domain are mapped independently, the probability that every element lands in the subset is (\frac{n-1}{n})^m.

Now there are n possibilities for the subset of size n-1, so we apply the union bound and say that the probability that our random function is not surjective is at most n (\frac{n-1}{n})^m. (Of course, this is an upper bound, and there is an error term; but we’ll return to that in a bit.)

So we want this expression to be smaller than, say, 1/10, which means we need (\frac{n-1}{n})^m \leq \frac{1}{10n}. But when n is large, (1 - \frac{1}{n})^n is about 1/e, so (\frac{n-1}{n})^m \approx e^{-m/n}; solving e^{-m/n} = \frac{1}{10n} gives m \approx n \ln (10n), so m has to be on the order of n \ln n!

Now we’ll backtrack a bit. How do we know that the union bound was reasonably tight? After all, we counted functions whose image has size at most n-2 more than once! Well, if you go back through the analysis and do inclusion-exclusion, you’ll see that the probability of missing some coupon stays close to 1 when m \ll n \log n — but I don’t know of a computation-free way to argue that n \log n is asymptotically right! Does anyone else?
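For what it’s worth, here’s a small simulation of the threshold (my own sketch; the parameters are arbitrary): well below n \ln n you almost never have every coupon, at about n \ln n you succeed roughly 1/e of the time, and well above it you almost always succeed.

```python
import math
import random

def collected_all(n, m):
    """Do m uniform draws from n coupon types hit every type?"""
    return len({random.randrange(n) for _ in range(m)}) == n

n, trials = 1000, 400
for factor in (0.5, 1.0, 1.5):
    m = int(factor * n * math.log(n))
    p = sum(collected_all(n, m) for _ in range(trials)) / trials
    print(f"m = {factor} n ln n: P(all collected) ~ {p:.2f}")
# Roughly 0.00, 0.37, and 0.97: the success probability at
# m = n ln n + cn tends to exp(-exp(-c)).
```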

So how is this “categorical thinking?” Well, it’s not, really. Category theory only really starts to get mildly interesting when you talk about functors, and doesn’t come into its own until natural transformations are introduced. But if you’ve learned to think categorically, you see morphisms where other people see objects — in this case, a function where others might see a set — and while this is rarely enough to apply abstract-nonsense tools, it is enough to broaden your intuition and reveal paths you might otherwise have missed. And this is at least as useful.

The importance of choosing the right model

December 13, 2009

I’ve had the idea for this post bouncing around in my head for several months now, but now that I don’t have classes to worry about I can finally get around to writing it. I want to talk about some “pathological examples” in computer science, in particular in complexity and computability theory.

I wanted to post a picture of William H. Mills with this post, in the spirit of Dick Lipton’s blog. Unfortunately I can’t find one! Mills was a student of Emil Artin in the ’40s; he finished his thesis in 1949 and promptly (as far as I can tell) disappeared until the ’70s, when we find some work by a William H. Mills on combinatorial designs. After this, there’s nothing of note until Dr. Mills passed away several years ago at the age of 85.

While his work (assuming it’s his) on combinatorial designs looks interesting, W.H. Mills’ small place in history is assured by a short and unassuming paper from 1947, published when he was still a student. In this paper he showed that there’s a constant, which he called A, such that \lfloor A^{3^n} \rfloor is prime for all integers n \geq 1. Nowadays it’s called Mills’ constant.
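For fun, here’s a sketch of mine that checks the first few values. The digits of A below are an assumption on my part, taken from standard references (where the value is computed assuming the Riemann hypothesis), and they’re only good for small n:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50

# Assumed: a published approximation of Mills' constant.
# Only this many digits are claimed.
A = Decimal("1.3063778838630806904686144926")

def is_prime(k: int) -> bool:
    """Naive trial division; fine for the tiny values below."""
    if k < 2:
        return False
    d = 2
    while d * d <= k:
        if k % d == 0:
            return False
        d += 1
    return True

for n in range(1, 4):
    p = int(A ** (3 ** n))     # floor(A^(3^n))
    print(n, p, is_prime(p))   # 2, 11, 1361 -- all prime
# Beyond n = 3 the floor sits extremely close to the prime, so many
# more digits of A are needed to compute it correctly.
```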

(more…)

More on graphs and digraphs

November 16, 2009

So this question’s been bugging me ever since I first thought it up, and I figured (in the spirit of MaBloWriMo, which by now is pretty much dead on this blog) that I’d ask about it here — I need to give Math Overflow a break.

The question concerns adjoint functors, which I don’t understand half as well as I’d like, but enjoy thinking about anyway. One of the (many!) motivating examples that adjoint functors generalize is the familiar “free/forgetful” dichotomy. For instance, there’s a functor from the category of groups (say) to the category of sets, defined by simply “forgetting” the group structure and giving back the underlying set. This functor doesn’t have an inverse, of course; that would make the two categories isomorphic, which is way too much to expect. Nor does it have an “inverse up to natural transformation”; that would make the categories equivalent, which is almost as good as isomorphism. But it does have the next-best thing after that: a functor in the opposite direction, together with a natural isomorphism between certain hom-sets. This is the free functor, which assigns to each set the free group on that set. Two functors related in this way are called adjoint functors.
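To spell out what that natural isomorphism says here (this is just the standard formulation): write F for the free functor and U for the forgetful functor. Then for every set S and every group H there is a bijection

\mathrm{Hom}_{\mathbf{Grp}}(F(S), H) \cong \mathrm{Hom}_{\mathbf{Set}}(S, U(H)),

natural in both S and H. In words: a homomorphism out of the free group on S is exactly the same data as an arbitrary function out of S, since the generators can be sent anywhere and everything else is forced.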

(more…)

GILA 4: The pseudorandom case

September 19, 2009

So in the last post, we defined the discrete Fourier transform and gave some of its basic properties. At the end, we claimed that it gives us a simple notion of pseudorandomness, one that makes rigorous the intuition that “pseudorandom subsets of \mathbb{Z}/N\mathbb{Z} should have many arithmetic progressions.” Today we’re going to justify this notion of pseudorandomness and work through the easy case of the proof of Roth’s theorem for \mathbb{Z}/N\mathbb{Z}. At the end, we’ll sketch how to modify the method for the usual finitary version of the theorem.
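(A quick reminder of where we left off, in my own paraphrase; the normalization may differ from the last post’s. For f : \mathbb{Z}/N\mathbb{Z} \rightarrow \mathbb{C} we set \hat{f}(\xi) = \sum_x f(x) e^{-2\pi i x \xi / N}, and a subset counts as “pseudorandom” when every nontrivial Fourier coefficient of its characteristic function is small.)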

(more…)

Quasi-bleg: Why are there bump functions?

September 4, 2009

When I was learning analysis (beyond, say, first-year calculus), one of the facts that most surprised me was that there are functions that are smooth (i.e., infinitely differentiable) and yet compactly supported. Of course, I didn’t think about it with that phrasing; there’s a pretty simple geometric interpretation of smoothness for most functions one encounters in calculus (indeed, one rarely sees differentiable functions there that aren’t smooth!). Specifically, if a function isn’t smooth at x, then there’s some sort of a “kink” at that point, or at least “around” that point.

Is this justified? Well, not totally, but let’s give a couple of examples to at least show why it’s a good first approximation. (more…)
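(For the impatient: the classical example, quite possibly one of the ones behind the cut, is the bump function \psi(x) = e^{-1/(1 - x^2)} for |x| < 1 and \psi(x) = 0 otherwise. All of its derivatives vanish as |x| \rightarrow 1, so it’s smooth everywhere, yet it’s supported on [-1, 1].)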

