The extremal toolbox: A matrix problem

January 10, 2010

I’m starting a new series of posts this semester where I get “back to basics.” One of the few areas of mathematics in which I can claim anything even in the same connected component as “expertise” is extremal combinatorics. Unfortunately for me and my lazy, big-picture brain, though, extremal combinatorics is very much a “problem-solving” subject, with a relatively small number of tools that are used to solve all sorts of different problems. So without some practice solving these problems, or expositing the solutions, it’s easy to get rusty.

Hence, “The Extremal Toolbox.” In each post, I’ll take a (solved!) problem in extremal combinatorics — anything from Sperner’s theorem to Kakeya over finite fields, as long as there’s an extremal flavor — and try to break down a proof into its component parts.

Today I’m going to examine a problem which appeared on MathOverflow some time ago, and which I didn’t quite solve (but came within epsilon of!). The relevant post is here; if you don’t care to click through, here’s the problem.

Let M be an n \times n matrix with non-negative integer entries. Suppose further that if m_{ij} is 0, then the sum of all the entries in the ith row or the jth column is at least n. Then the sum of all the entries in M is at least n^2/2.
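Here’s a tiny sanity-check sketch in Python (the names are mine), reading the hypothesis literally as stated: for each zero entry, the row sum or the column sum is at least n.

    def satisfies_hypothesis(M):
        """Every zero entry m_ij has row-i sum at least n or column-j sum at least n."""
        n = len(M)
        row_sums = [sum(row) for row in M]
        col_sums = [sum(M[i][j] for i in range(n)) for j in range(n)]
        return all(row_sums[i] >= n or col_sums[j] >= n
                   for i in range(n) for j in range(n) if M[i][j] == 0)

    def total(M):
        return sum(map(sum, M))

    # n = 4: a scaled identity satisfies the hypothesis, and its sum 16 is at least 4^2/2 = 8.
    M = [[4, 0, 0, 0],
         [0, 4, 0, 0],
         [0, 0, 4, 0],
         [0, 0, 0, 4]]
    print(satisfies_hypothesis(M), total(M))   # True 16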

Read the rest of this entry »

Bleg: What’s the most recent day no one alive was born?

January 5, 2010

Inspired by Michael Lugo’s post on reconstructing a person from their DOB, zipcode, and gender.

If you, for whatever reason, ever watch the Today show, you’ll notice that one of the recurring features is the hosts listing the names of some men and women who are turning 100. Becoming a centenarian is a reasonably big accomplishment — in the U.S., it nets you a congratulatory letter from the President, for example. But if you look into it, you’ll notice that you can find someone turning 100 on pretty much any given day. Usually not someone particularly well-known, but certainly someone. (I tried to find someone famous and vaguely math-related who just turned or is turning 100 for this post, but couldn’t; however, the fascinating economist Ronald Coase turned 99 last week.) It’s almost certainly true that on any given day, someone somewhere in the world is in fact celebrating their 100th birthday. But go ten years further, and you find almost no one who lives to 110. Actually, I know of only one supercentenarian, living or not, who is interesting for reasons apart from his longevity — the late Leopold Vietoris, the topologist, probably best known as half of the Vietoris-Rips complex and the Mayer-Vietoris sequence. Odds are pretty good that no one alive is turning 110 today, or tomorrow, or (sadly) New Year’s Day.

So… a question is starting to take shape. On every day between December 29, 1909, and today, someone was born who is still living today. But go much earlier than that, and the above statement begins to be false. So what’s the most recent day that no one living was born on?

Read the rest of this entry »

What’s a “locally determined graph property?”

January 1, 2010

This has nothing to do with the rest of the post, but I’ll put it here so you read it before you get bored. I’d like to thank my readers (all seven of you) for supporting this blog in the first six months or so of its existence, and hope that you’ll stick around (and be joined by hundreds of new readers…) to hear my sporadic ramblings and wild ravings in the next year. Here’s to a happy and successful 2010!

Over at MathOverflow, Gjergji Zaimi asks (in a criminally under-voted-for question): How can we obtain global information from local data in graph theory?  This is something that perhaps everyone working in or around graph theory has asked themselves, in some form, at some point — I know I have. So it’s not surprising that Gjergji’s question has received many different answers with many different interesting things to say.

I originally wanted to write a post trying to “answer” Gjergji’s question as best I could, but quickly realized the futility of that goal — it’s such a broad and deep question that I doubt if anyone could answer it concisely, and I know I couldn’t! So instead I’ll just talk about an \epsilon of the question — what does it even mean, “local data?”

Read the rest of this entry »

Excitement!

December 29, 2009

Sorry about the lack of a new post; it’s coming. It turned out to be a more interesting problem than I at first thought; look for it around New Year’s.

I’m working through some of the holes in my graph knowledge with my shiny new copy of Bollobas’ Modern Graph Theory. Chapter 1, Exercise 19 is a problem I’ve done before, but the way it’s presented makes me want to do it all over again:

Characterize the degree sequences of forests!

Exercise 17 is about the degree sequences of trees, and 18 extends it to forests with a fixed number of components — so this isn’t totally out of the blue. Still, it makes me wonder why more textbooks don’t end problems with exclamation marks.

The coupon collectors’ problem

December 28, 2009

This is a considerably lower-level post than usual, which I’ll (following Terry Tao) also blame on the holidays; there’s another, even less mathematical post in the works which I hope to finish sometime tomorrow.

How many times do you need to flip a coin before you expect to see both heads and tails? How many times do you need to roll a die before you expect to see all the numbers 1-6? These are two instances of the coupon collectors’ problem. Wikipedia gives not one, but two nice solutions to the problem, but there’s an even nicer “back-of-the-envelope” calculation which gives you the correct asymptotics for virtually nothing, and (I like to think) shows the power of thinking “categorically” at even a very low level.

So let’s give a statement of the problem. A company — say Coca-Cola, for concreteness — is holding a contest where everyone who collects one each of n different “coupons” wins some prize. You get a coupon with each purchase of a Coke, and each coupon is equally likely. What’s the expected number of Cokes you have to buy in order to collect all the coupons?

If you do some experimentation (or calculation) with small instances, you’ll see that this number seems to be growing somewhat faster than n. For n = 2, for example, the expected number is 3, and for n = 3 it’s 5.5. But how much faster? Like n^2? Like n \sqrt{n}? Or just a constant times n?
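If you’d rather let the computer do the experimenting, here’s a quick simulation sketch in Python (the function names are just for illustration; the exact answer, for comparison, is n times the n-th harmonic number):

    import random

    def purchases_to_collect(n):
        """Buy Cokes until all n coupon types have been seen; return the count."""
        seen, bought = set(), 0
        while len(seen) < n:
            seen.add(random.randrange(n))   # each coupon equally likely
            bought += 1
        return bought

    def average_purchases(n, trials=20000):
        """Monte Carlo estimate of the expected number of purchases."""
        return sum(purchases_to_collect(n) for _ in range(trials)) / trials

    for n in (2, 3, 10, 100):
        print(n, round(average_purchases(n), 2))   # roughly 3, 5.5, 29.3, 518.7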

None of the above, as it happens; you might have already guessed (or known) that the correct order of growth is n \log n. Here’s how you can figure this out for yourself.

Think of the collection as a function from Coke bottles to (equivalence classes of) coupons. If we’ve collected all the coupons, the function is surjective. So we can rephrase “What is the probability that, after I buy m Coke bottles, I have collected all n coupons?” as “What is the probability that a random function from a set with m elements to a set with n elements is surjective?” Actually, we’ll estimate the probability that it’s not surjective.

If the function isn’t surjective, then its image contains at most n-1 elements. Fix a subset of n-1 elements of the codomain; the probability that our random function takes a given element of the domain into this subset is \frac{n-1}{n}. Since these events are independent for different elements of the domain, the probability that the random function takes every element into the subset is (\frac{n-1}{n})^m.

Now there are n possibilities for the subset of size n-1, so we apply the union bound and say that the probability that our random function is not surjective is at most n (\frac{n-1}{n})^m. (Of course, this is an upper bound, and there is an error term; but we’ll return to that in a bit.)

So we want this expression to be smaller than, say, 1/10, which means we need (\frac{n-1}{n})^m \leq \frac{1}{10n}. But when n is large, we have that (1 - \frac{1}{n})^n is about 1/e, so taking logarithms, m has to be on the order of n \ln n!
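Written out, the arithmetic behind that last step is just

n\left(1 - \frac{1}{n}\right)^m \leq \frac{1}{10} \iff m \ln\left(1 - \frac{1}{n}\right) \leq -\ln(10n),

and since \ln(1 - \frac{1}{n}) \leq -\frac{1}{n}, it suffices to take m \geq n \ln(10n) = n \ln n + n \ln 10.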

Now we’ll backtrack a bit. How do we know that the union bound was reasonably tight? After all, we counted functions whose image had size n-2 twice! Well, if you go back through the analysis and do inclusion-exclusion, you’ll see that the probability of missing some coupon winds up being close to 1 when m \ll n \log n — but I don’t know of a computation-free way to argue that n \log n is asymptotically the right order! Does anyone else?
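For the record, inclusion-exclusion gives the exact probability of missing at least one coupon after m purchases:

\sum_{k=1}^{n} (-1)^{k+1} \binom{n}{k} \left(1 - \frac{k}{n}\right)^m.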

So how is this “categorical thinking?” Well, it’s not, really. Category theory only really starts to get mildly interesting when you talk about functors, and doesn’t come into its own until natural transformations are introduced. But if you’ve learned to think categorically, you see morphisms where other people see objects — in this case, a function where others might see a set — and while this is rarely enough to apply abstract-nonsense tools, it is enough to broaden your intuition and see paths you might have otherwise missed. And this is at least as useful.

An etymological question

December 18, 2009

Galois was of course the first to use the notion of a field with great success. However, if one reads his papers, one will see that he never explicitly gave a name to the concept of an algebraic structure closed under addition, subtraction, commutative multiplication, and division. Dedekind would be the first to do that; he gave the name Körper, or “body,” to what we’d today call a number field. A couple of decades later, E.H. Moore of Chicago would introduce the term “field” in English.

“Körper” caught on fairly quickly among Continental mathematicians, giving us the French corps, and from there it spread to Spanish and Portuguese; in the other direction, the German mutated into Hungarian “test” and Polish “ciało”, both essentially with the same meaning of “body.”

However, in Italian and most of the Slavic languages, the word for “field” is also the agricultural term. This means that the algebraic terminology didn’t solidify until considerably later, probably between the World Wars at the earliest. This is understandable; while both Italy and Russia had strong mathematical communities around the turn of the last century, they were somewhat isolated and, if nothing else, had relatively fewer top-tier algebraists than the French or, especially, the German schools.

What’s really curious is the following: In both Italian and Russian, as I mentioned, the word for English “field” is a literal translation of “field.” In pretty much every language, the word for “ring” can also refer to a thing that you wear on your finger. But in Italian and (several of) the Slavic languages — and in these languages alone, as far as I know — the word for “skew field”, or “division ring”, translates to English as “body”! This seems to me to be a rather exceptional situation — surely either a modification of “ring” or of “field” will do, as in every other language, but it seems not to be the case. So there are two open problems here:

  1. Explain the situation that caused “field” to replace “body” to refer to a commutative division ring, but not to refer to a division ring in general, in Italian and Russian.
  2. Are there any other examples of crufty terminology that’s unique to one or two languages (or closely related language families)?

The importance of choosing the right model

December 13, 2009

I’ve had the idea for this post bouncing around in my head for several months now, but now that I don’t have classes to worry about I can finally get around to writing it. I want to talk about some “pathological examples” in computer science, in particular in complexity and computability theory.

I wanted to post a picture of William H. Mills with this post, in the spirit of Dick Lipton’s blog. Unfortunately I can’t find one! Mills was a student of Emil Artin in the ’40s; he finished his thesis in 1949 and promptly (as far as I can tell) disappeared until the ’70s, when we find some work by a William H. Mills on combinatorial designs. After this, there’s nothing of note until Dr. Mills passed away several years ago at the age of 85.

While his work (assuming it’s his) on combinatorial designs looks interesting, W.H. Mills’ small place in history is assured by a short and unassuming paper from 1947, published when he was still a student. In this paper he showed that there’s a constant, which he called A, such that \lfloor A^{3^n} \rfloor is prime for all integers n \geq 1. Nowadays it’s called Mills’ constant.
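Just to see the theorem in action, here’s a quick sketch in Python; the decimal approximation of A below is truncated, so it’s only good for the first few terms.

    from decimal import Decimal, getcontext

    getcontext().prec = 50
    A = Decimal("1.3063778838630806904686144926")   # truncated approximation of Mills' constant

    for n in range(1, 4):
        print(n, int(A ** (3 ** n)))   # floor of A^(3^n): prints 2, 11, 1361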

Read the rest of this entry »

Where do graphs live?

December 5, 2009

This post came out of some thoughts I posted (anonymously, but mostly because I didn’t feel like registering) over at nLab. I don’t think it’s a secret that I’m heavily interested in the relationships between category theory and combinatorics, and more generally the ways in which we can use “structured” algebraic objects and “continuous” topological objects to gain information about the unstructured discrete objects in combinatorics. That said, the folks over at the nLab work on some crazy abstract stuff, which seems about as far away as possible from the day-to-day realities of graph theory or set systems. And maybe it is — but I hope it’s not, and as far as I’m concerned, this is a windmill that deserves to be tilted at. (After all, it might be a giant.)

So as my jumping-off point, I’ll take my observation from last time that the relationship between graphs and digraphs is analogous to the one between groupoids and categories. I briefly mentioned something called a quiver, which can be thought of as any of the following:

  • Another name for a digraph, which categorical people use when they don’t want us combinatorialists stomping in and getting the floor all muddy;
  • A “free category,” i.e., one in which there are no nontrivial relations among composites of morphisms;
  • An algebraic object whose representations we want to consider; it’s worth thinking of it this way mostly because of the “freeness,” although if you try to define it more formally you’ll probably end up with the previous definition;
  • What you get when you take (part of) a category and forget all the rules for how morphisms compose.

This last point is the most interesting one for our purposes, since it’s clearly an algebraic object but isn’t as restrictive as “free category,” and thus has a chance of capturing the unstructured behavior of the combinatorial zoo. But it’s tricky to turn this into a rigorous definition that actually includes everything we want to be a quiver… so we’ll just use “quiver” as a fancy name for “digraph.” However, there’s an important philosophical lesson to be learned from the final point, so I’ll set it off:

Philosophical lesson. The edges of a quiver shouldn’t carry any information except for the vertices they are incident to; more generally, paths in a quiver shouldn’t carry any information except for their sequence of vertices.
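To make the lesson concrete, here’s a minimal sketch in Python (the function name is mine): a quiver is just a bare digraph, and the morphisms of its free category are the paths, recorded as vertex sequences and composed by concatenation, with no further relations.

    from collections import defaultdict

    def free_category_morphisms(edges, max_length):
        """Paths of length 1..max_length in the digraph, as vertex sequences.
        Composition is concatenation; two paths are equal exactly when they
        visit the same vertices in the same order."""
        out = defaultdict(list)
        for u, v in edges:
            out[u].append(v)
        paths = [(u, v) for u, v in edges]
        frontier = paths[:]
        for _ in range(max_length - 1):
            frontier = [p + (w,) for p in frontier for w in out[p[-1]]]
            paths.extend(frontier)
        return paths

    print(free_category_morphisms([("a", "b"), ("b", "c"), ("c", "a")], 3))
    # [('a', 'b'), ('b', 'c'), ('c', 'a'), ('a', 'b', 'c'), ..., ('c', 'a', 'b', 'c')]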

Read the rest of this entry »

More on graphs and digraphs

November 16, 2009

So this question’s been bugging me ever since I first thought it up, and I figured (in the spirit of MaBloWriMo, which by now is pretty much dead on this blog) that I’d ask about it here — I need to give Math Overflow a break.

The question concerns adjoint functors, which I don’t understand half as well as I’d like, but enjoy thinking about anyway. One of the (many!) motivating examples that adjoint functors generalize is the common “free/forgetful” dichotomy. For instance, there’s a functor from the category of groups (say) to the category of sets, which is defined by simply “forgetting” the group structure and giving back the underlying set. This functor doesn’t have an inverse, of course; that would make the two categories isomorphic, which is way too much to expect. Nor does it have an “inverse up to natural transformation.” That would make the categories equivalent, which is almost as good as isomorphism. But it does have the next-best thing after that: a functor in the opposite direction which comes with a natural isomorphism between certain hom-sets. This is the free functor, which assigns to each set the free group on that set. These functors are called adjoint functors.
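Concretely, writing F for the free functor and U for the forgetful one, the natural isomorphism in question is

\mathrm{Hom}_{\mathbf{Grp}}(F(S), G) \cong \mathrm{Hom}_{\mathbf{Set}}(S, U(G)),

natural in the set S and the group G: a homomorphism out of the free group on S is the same thing as an arbitrary function out of S.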

Read the rest of this entry »

Generalized LYM inequalities

November 6, 2009

I’ve been thinking about this problem ever since an old post of Qiaochu’s first raised the question, and I’ve been frustrated by my inability to solve it. I could post it on MO, but I sort of already have, and anyway it raises questions which are too ill-formed right now to be right for MO. So anyway, here we go:

A lot of problems in extremal combinatorics correspond to finding large antichains in partially ordered sets. (By the way, all posets in this post will be assumed to have a least element.) Classically speaking, Dilworth’s theorem completely characterizes the size of antichains in posets; however, this is often tricky to apply, since it’s not always clear whether a partition into chains is minimal. In addition, it’s sometimes the case (particularly with infinite posets) that there are infinite antichains, but a nontrivial bound should still be attainable. The way to get around both of these obstacles is to assign weights to the elements of the poset, and rather than looking for large antichains, we look for antichains with high total weight.
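To fix ideas: in the Boolean lattice of subsets of an n-element set, the right weights turn out to be w(A) = \binom{n}{|A|}^{-1}, and the resulting statement for an antichain \mathcal{A} is the LYM inequality

\sum_{A \in \mathcal{A}} \binom{n}{|A|}^{-1} \leq 1.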

The classical example of this solves the problem of finding the largest antichain in the lattice of subsets of a given finite set — the content of Sperner’s theorem.

Read the rest of this entry »