Chromotopy

Watercooler research

Motivating Goodwillie's construction

February 24, 2011, by Eric Peterson

I have tried at several points over the past year or so to get through Goodwillie’s three papers on his calculus of functors, but each time I make it through just a handful of pages before getting discouraged or distracted. Some parts of it have started to sink in, though, and I wanted to share what I’ve collected so far with you all. Without a doubt this viewpoint is known to everyone versed in the subject, but I haven’t found this analogy written down publicly yet. Here goes:

Polynomial functions are a nice class of functions for us to consider because they have two important properties: first, they have an ascending filtration by degree, so that if a polynomial is of degree at most $n$ then it is also of degree at most $n+1$, and second, a degree $n$ polynomial is completely determined by its values at any $(n+1)$ sample points. Given an arbitrary function $f$, we can build a polynomial $T_n f$ of degree $n$, called a Taylor polynomial, which is sort of the best polynomial approximation to $f$ at a fixed basepoint (say $0$) in the following sense: $(T_n f)(0) = f(0)$ and $(T_n f)^{(k)}(0) = f^{(k)}(0)$ for $k \le n$. These polynomials can be constructed in a variety of ways, but here's a cool one that's relevant for us:

Pick $(n+1)$ sample points $x_0, \ldots, x_n$, and build the interpolating polynomial $p_\varepsilon$ passing through the points $(\varepsilon x_i, f(\varepsilon x_i))$. Then, $T_n f$ is defined using the limit

$$T_n f = \lim_{\varepsilon \to 0} p_\varepsilon.$$

If you haven’t seen this before, then it’s a good idea to work through an example. Say $f(x) = e^x$, $n = 2$, and $x_i = i$, so that the sample points are $\{0, \varepsilon, 2\varepsilon\}$. Then $p_\varepsilon(x) = a_\varepsilon x^2 + b_\varepsilon x + c_\varepsilon$, where you use whatever your favorite computational method is to determine that $c_\varepsilon = 1$, $b_\varepsilon = \frac{4e^\varepsilon - e^{2\varepsilon} - 3}{2\varepsilon}$, and $a_\varepsilon = \frac{e^{2\varepsilon} - 2e^\varepsilon + 1}{2\varepsilon^2}$. Using L’Hôpital’s rule to take limits as $\varepsilon \to 0$, we find that $c_\varepsilon \to 1$, $b_\varepsilon \to 1$, and $a_\varepsilon \to \frac{1}{2}$, and hence $T_2 e^x = 1 + x + \frac{x^2}{2}$. I promise that this works in general; pick an arbitrary function and number of points, and you’ll get the first chunk of its Maclaurin expansion.
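If you'd rather see a machine grind through the limits, here's a quick sketch of the example above (my addition, using sympy; the sample points are the ones just chosen):

```python
# Sketch: recover T_2 exp as the limit of interpolating polynomials
# through the sample points {0, epsilon, 2*epsilon}.
import sympy as sp

x, eps = sp.symbols('x epsilon', positive=True)

# Lagrange-interpolate exp through the three sample points.
points = [0, eps, 2 * eps]
p = sp.expand(sp.interpolate([(t, sp.exp(t)) for t in points], x))

# Read off the coefficients a_eps, b_eps, c_eps and let eps -> 0.
limits = [sp.limit(p.coeff(x, k), eps, 0) for k in (2, 1, 0)]
print(limits)  # [1/2, 1, 1], i.e. T_2 e^x = 1 + x + x^2/2
```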

We’ll now spend the rest of this discussion translating this setup, line by line, into the ridiculous context of homotopy functors between pointed simplicial model categories C and D.^{1} The first thing we should do is specify what it means to pick a collection of sample points and their relation to a favorite basepoint; to do so, we’ll introduce cubes. Let $[d]$ denote the set $\{1, 2, \ldots, d\}$. Then, $P[d]$, the power set of $[d]$, is a partially ordered set and hence a category. A *d-cube in C*, then, is a diagram $X: P[d] \to C$; the reason for the name is that the indexing category $P[1]$ looks like a line, $P[2]$ like a square, $P[3]$ like a cube, and so on through the hypercubes. A cube is said to be *Cartesian* (resp. *coCartesian*) if the initial (resp. final) vertex is weakly equivalent to the homotopy limit (resp. colimit) of the remainder of the cube with that vertex deleted. A cube being (co)Cartesian is a statement about redundancy; the data contained in these initial and terminal corners are recoverable using just the rest of the cube and a limiting or gluing procedure. There’s also a more extreme form of redundancy: a d-cube is said to be *strongly coCartesian* if all of its faces of dimension at least 2 are themselves coCartesian. When $d$ is at least 2, this clearly implies that the cube is itself coCartesian; we’re asserting that to specify a strongly coCartesian cube, you need only describe the initial vertex $X_\emptyset$ and $d$ maps of the form $X_\emptyset \to X_{\{i\}}$ for $1 \le i \le d$, and everything else can be fleshed out by taking homotopy pushouts from there.
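If a picture helps, here is a throwaway sketch (my addition, nothing from the papers) enumerating the indexing poset $P[d]$ and its edges, so the line/square/cube pattern shows up in the vertex and edge counts:

```python
# A small model of the indexing poset P[d]: objects are subsets of
# {1, ..., d}, with a unique map S -> T whenever S is contained in T.
from itertools import combinations

def power_set(d):
    """All subsets of {1, ..., d}, as frozensets."""
    elems = range(1, d + 1)
    return [frozenset(c) for k in range(d + 1)
            for c in combinations(elems, k)]

def covering_edges(d):
    """The edges of the d-cube: inclusions adding a single element."""
    subsets = power_set(d)
    return [(S, T) for S in subsets for T in subsets
            if S < T and len(T) == len(S) + 1]

# P[2] is a square (4 vertices, 4 edges); P[3] is a cube (8 and 12).
print(len(power_set(2)), len(covering_edges(2)))  # 4 4
print(len(power_set(3)), len(covering_edges(3)))  # 8 12
```

Note that the initial vertex $\emptyset$ has exactly $d$ outgoing edges, matching the claim that a strongly coCartesian cube is pinned down by the initial vertex and the $d$ maps out of it.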

Now, we can define our analogues of polynomials: $F$ is said to be *d-excisive* (or of degree at most $d$) when it sends strongly coCartesian $(d+1)$-cubes to Cartesian $(d+1)$-cubes. This completes the analogy with the first paragraph of this post: a $d$-excisive functor is also $(d+1)$-excisive, so these classes of functors are ascending, and this business about being strongly coCartesian corresponds to selecting an interesting basepoint value $X_\emptyset$, $(d+1)$ sample points $X_{\{i\}}$ together with their relations to the basepoint, and then after applying $F$ just to the extraneous sample points we can reconstruct $F(X_\emptyset)$ by taking a homotopy limit.^{2}
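To see the smallest case concretely (my unpacking, with notation as above): when $d = 1$, a strongly coCartesian 2-cube is just a homotopy pushout square, and Cartesian means homotopy pullback, so a 1-excisive functor sends homotopy pushouts to homotopy pullbacks, which is exactly the excision/Mayer–Vietoris property of a homology theory:

```latex
% d = 1: a strongly coCartesian square is a homotopy pushout,
% and F being 1-excisive says its image is a homotopy pullback.
\begin{array}{ccc}
X_\emptyset & \longrightarrow & X_{\{1\}} \\
\downarrow & & \downarrow \\
X_{\{2\}} & \longrightarrow & X_{\{1,2\}}
\end{array}
\quad \text{pushout}
\qquad \Longrightarrow \qquad
F(X_\emptyset) \simeq \operatorname{holim}\!\left(
  F(X_{\{1\}}) \rightarrow F(X_{\{1,2\}}) \leftarrow F(X_{\{2\}})
\right).
```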

In our example, I didn’t select arbitrary sample points $x_i$, but instead I picked $x_i = i$ to ease our computation. We’re going to do something similar in the Goodwillie setting; for a finite set $T$ and an object $A \in C$, define $A * T$ to be the join of $A$ with $T$, so $A * \{1\}$ is the cone on $A$, $A * \{1,2\}$ is the suspension or a 2-pointed cone, $A * \{1,2,3\}$ is a 3-pointed cone, and so on. For a fixed $A$ and $d$, the assignment $U \mapsto A * U$ is a strongly coCartesian cube, which is going to be our analogue of picking smart sample points.^{3} To compute $P_d F$, we basically force excisiveness to hold:

$$T_d F(X) = \operatorname{holim}_{\emptyset \neq U \subseteq [d+1]} F(X * U).$$

Since $F(X) = F(X * \emptyset)$ itself gives a top corner for the cube and $T_d F(X)$ is defined with a limit, we have a natural map $F(X) \to T_d F(X)$. Our analogue of letting the sample points tend to the basepoint doesn’t make much sense, but we do something that looks vaguely analogous:

$$P_d F(X) = \operatorname{hocolim}\left( F(X) \to T_d F(X) \to T_d T_d F(X) \to \cdots \right).$$

It is pretty difficult to show that this is the right construction; in fact, Goodwillie writes in his paper that his own proof hardly makes sense to him. The key point is that, when evaluated on a strongly coCartesian $(d+1)$-cube $\mathcal{X}$, the map $F(\mathcal{X}) \to T_d F(\mathcal{X})$ factors through a Cartesian $(d+1)$-cube $Y$, i.e., we have maps $F(\mathcal{X}) \to Y \to T_d F(\mathcal{X})$. At each stage of the sequential colimit defining $P_d F$, then, we can insert one of these Cartesian cubes and instead take a homotopy colimit through them, which guarantees that what we get in the end will be $d$-excisive.^{4}

There’s a natural map $F \to P_d F$. Because the natural map $F \to T_d F$ is an equivalence when $F$ is $d$-excisive, $F \to P_d F$ is also an equivalence when $F$ is $d$-excisive. So, because $P_d F$ is itself $d$-excisive, we get that $P_d F \to P_d P_d F$ is a weak equivalence, which is one of the special properties of the original Taylor polynomials. The other equivalence, $P_d P_{d+1} F \simeq P_d F$, is a little more technical; it requires results about sequential homotopy colimits commuting with finite homotopy limits, but I promise that it’s true too. This second equivalence gives us a map $P_{d+1} F \to P_d P_{d+1} F \simeq P_d F$, and hence these approximating functors assemble into one big tower. A functor is said to be *analytic* when it’s weakly equivalent to the homotopy limit of this tower, or to have a *radius of convergence* when it’s only sometimes weakly equivalent to the limit of the tower.
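Assembled, the tower looks like this (my picture; the identification of the bottom stage uses that a 0-excisive functor is homotopy constant):

```latex
% The Goodwillie--Taylor tower: F maps into every stage, and the
% stages are linked by P_{d+1} F -> P_d P_{d+1} F ~ P_d F.
\cdots \longrightarrow P_3 F \longrightarrow P_2 F \longrightarrow P_1 F
       \longrightarrow P_0 F \simeq F(*),
\qquad
F \longrightarrow \operatorname{holim}_d P_d F.
```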

The real utility of these properties is that they imply the universality of Goodwillie’s construction. If $F \to G$ is a map to a $d$-excisive functor $G$, then there exists a zigzag factoring it as $F \to P_d F \to G$.

When you read the example at the top of the page with the second-order expansion of the exponential, you believed that what I was saying was true because you recognized the series: you already knew how to compute it. Namely, Taylor’s theorem gives us an explicit description of these polynomials as $T_n f(x) = \sum_{k=0}^{n} \frac{f^{(k)}(0)}{k!} x^k$. This formula, miraculously, is mirrored in the Goodwillie calculus (for stable model categories). I will omit absolutely all details; I just want to give an example of the depth of the analogy we’re pursuing.

Define $f_d(x) = \frac{f^{(d)}(0)}{d!} x^d$ to be the part of $f$ which is homogeneous of degree $d$. On the level of functors, we have $D_d F = \operatorname{hofib}(P_d F \to P_{d-1} F)$. This object gives us one way of measuring when a functor is $(d-1)$-excisive; if it were, then $D_d F$ would vanish everywhere. There’s another way to test for excisiveness called *cross-effects*:

$$\operatorname{cr}_d F(X_1, \ldots, X_d) = \operatorname{tothofib}\left( T \mapsto F\Big( \bigvee_{i \in [d] \setminus T} X_i \Big) \right),$$

where the cube of wedges is the strongly coCartesian $d$-cube whose $T^{\text{th}}$ vertex is of the form $\bigvee_{i \in [d] \setminus T} X_i$, and $\operatorname{tothofib}$ denotes the total homotopy fiber. The cross-effects will also vanish when $F$ is $(d-1)$-excisive, and it’s natural to ask how these two methods compare. Goodwillie produces the following formula:

$$D_d F(X) \simeq \Big( \operatorname{cr}_d D_d F(\mathbb{S}, \ldots, \mathbb{S}) \wedge X^{\wedge d} \Big)_{h\Sigma_d}.$$

This is really a magical thing to write. The spectrum produced by applying the cross-effects functor at the sphere spectrum plays the role of the $d^{\text{th}}$ derivative $f^{(d)}(0)$, and then we also smash against $d$ copies of $X$, just as in Taylor’s formula. Finally, the analogue of dividing by $d!$ is the homotopy quotient by the evident action of the symmetric group on $d$ letters on this smash product. Cute!
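Back on the function side, the shadow of the cross-effect is the $d$-th finite difference $\operatorname{cr}_d f(x_1, \ldots, x_d) = \sum_{S \subseteq [d]} (-1)^{|S|} f\big(\sum_{i \notin S} x_i\big)$, which vanishes identically when $f$ is a polynomial of degree less than $d$, and on $f(x) = x^d$ returns $d! \cdot x_1 \cdots x_d$, with the $d!$ that the homotopy quotient above is dividing back out. A quick sketch of this analogy (my addition, not from the post):

```python
# Sketch: the function-level analogue of cross-effects, computed as an
# alternating sum over subsets (a d-th order finite difference).
from itertools import combinations

def cross_effect(f, xs):
    """cr_d f(x_1, ..., x_d): alternating sum of f over sub-sums of the x_i,
    with sign (-1)^|S| for the subset S of omitted indices."""
    d = len(xs)
    total = 0
    for k in range(d + 1):
        for S in combinations(range(d), k):
            total += (-1) ** k * f(sum(x for i, x in enumerate(xs) if i not in S))
    return total

cube = lambda x: x ** 3
print(cross_effect(cube, [1, 2, 3, 4]))  # 0: x^3 has degree < 4
print(cross_effect(cube, [1, 2, 3]))     # 36 = 3! * 1 * 2 * 3
```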

(Note: This is maybe not the best presentation of the Taylor coefficient, since it requires you to already know $D_d F$ to compute it, which is sort of contrary to the point. There are other ways to compute the coefficient by linearizing the cross-effects functor, without applying $P_d$ at any point.)

If you’re interested in reading more, you should pull up Goodwillie’s three papers, amusingly titled Calculus 1, Calculus 2, and Calculus 3, or you can check out Kuhn’s smartly-written, example-rich introduction to the calculus in general and also its interactions with and applications to chromatic homotopy theory.

^{1} - If you don’t have the stomach for model categories, you can always specialize to the case of spaces or, even better, spectra. Mostly I just want to be able to take homotopy (co)limits.

^{2} - To check your understanding of the definition, you should check that homology functors are 1-excisive, or *linear*. In fact, Brown representability says that 1-excisive functors taking coproducts to coproducts are all homology functors. Without this second condition, 1-excisive functors are harder to classify; on spectra, mapping space functors and Bousfield localizations are 1-excisive, for instance!

^{3} - This is maybe more like picking a primitive $(n+1)^{\text{th}}$ root of unity $\zeta$ and then using $\{\varepsilon \zeta^0, \varepsilon \zeta^1, \ldots, \varepsilon \zeta^n\}$ as the interpolating points, rather than using the points I picked above.

^{4} - To check that you understand this construction, let’s compute $P_1$ of the identity functor on pointed spaces. We pick an object $A$, then take a homotopy colimit along $CA \leftarrow A \to CA$ to fill the square out with the suspension of $A$. Then, we apply the identity functor to the cube and take a homotopy pullback of the weakly equivalent corner $* \to \Sigma A \leftarrow *$ to compute $T_1 \operatorname{id}(A) \simeq \Omega \Sigma A$, and hence $P_1 \operatorname{id}(A) \simeq \operatorname{hocolim}_n \Omega^n \Sigma^n A = QA$!