Gordon Slade gave a talk today about an area of heavy current research interest in probability theory – the study of self avoiding random walks. Slade is a professor at the University of British Columbia and is a fairly distinguished expert on this subject, so there was no way I could miss the chance to see him speak. However, as a lowly mortal, I had the usual colloquium experience. I followed along for about ten or fifteen minutes, and then the train left the station, with me on the platform too dumbfounded to even wave goodbye. But the talk did have enough to capture my imagination, and so I will (in a totally non-rigorous fashion) give a brief account of what Slade talked about.
One of the basic processes in probability is that of a simple random walk, which for the sake of this post is a discrete-time process taking values in $\mathbb{Z}^d$ according to the rule that at any given time $t$ and position $p$, the position at time $t+1$ is sampled uniformly from the lattice points neighboring $p$. So for example, when $d = 1$ this could be generated by a sequence of coin flips, where with a heads you win $1 and with tails you lose $1. Your income from playing this game, given a fair coin, follows a simple random walk.
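(Just to make the $d = 1$ coin-flip picture concrete, here is a quick Python sketch – my own illustration, not anything from the talk; the function name is made up.)

```python
import random

def simple_random_walk(n_steps, seed=0):
    """Simulate a 1-d simple random walk: +1 for heads, -1 for tails."""
    rng = random.Random(seed)
    position = 0
    path = [position]
    for _ in range(n_steps):
        position += rng.choice((1, -1))  # fair coin flip
        path.append(position)
    return path

print(simple_random_walk(10))  # e.g. [0, 1, 0, -1, ...]
```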
The simple random walk has been heavily studied. One of the most pervasive ideas of probability is that, while we may be interested in huge-yet-discrete problems (indeed, the study of self avoiding random walks is, at first glance, just a giant potpourri of combinatorics – e.g. the number of self avoiding random walks of length 71 on the lattice $\mathbb{Z}^2$ is of the order of magnitude $10^{30}$, iirc), we can greatly improve our understanding of the matter at hand by taking some sort of “scaling limit”. For simple random walks on $\mathbb{Z}$, for example, this involves essentially interpolating the walk (to get something that now has continuous paths), scaling time and space properly, and then in a suitable limit these things converge to Brownian motion. To a large degree, Brownian motion is easier to study and has much nicer properties, and yields certain approximate behavior regarding simple random walks, in the same sense that the central limit theorem allows statisticians to pretend that pretty much everything is normally distributed (e.g. histograms look like a bell curve).
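(As a sanity check on the diffusive scaling – again my own toy Monte Carlo, with arbitrary parameters, not anything from the talk – one can rescale the endpoint of an $n$-step walk by $1/\sqrt{n}$ and watch it behave like a standard Gaussian, i.e. like Brownian motion at time 1.)

```python
import random
import statistics

def rescaled_endpoint(n, rng):
    """Endpoint of an n-step 1-d simple random walk, rescaled by 1/sqrt(n)."""
    return sum(rng.choice((1, -1)) for _ in range(n)) / n ** 0.5

rng = random.Random(0)
samples = [rescaled_endpoint(2_500, rng) for _ in range(2_000)]
# Donsker's theorem predicts these look like B_1 ~ N(0, 1):
print(statistics.mean(samples), statistics.variance(samples))  # ~0 and ~1
```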
Much of probability is rooted in physical problems, and the self avoiding random walk is no different in this regard. Physicists and chemists found themselves working on problems that involved “almost” random walk behavior, except that for physical considerations the paths were not allowed to intersect themselves. For example, in the study of chemical polymers one might hope to have some sort of probabilistic model of polymer behavior based on monomers linking together to form a chain according to some rules – but you know that at any time the polymer cannot intersect itself, in the sense that monomers occupy finite but nonzero volume (though the polymer is nonetheless often twisted up in a nontrivial way).
The mathematical formalism covering such concepts is basically the following. Fix a space dimension $d$. For each natural number $n$, we consider the set $\Omega_n$ of maps $\omega \colon \{0, 1, \dots, n\} \to \mathbb{Z}^d$ that satisfy the property that $\omega(0) = 0$, $|\omega(i+1) - \omega(i)| = 1$ for each $i < n$, and $\omega(i) \neq \omega(j)$ for any distinct $i, j$. This is the space of $n$-step walks on the lattice $\mathbb{Z}^d$ that do not intersect themselves. To bring probability into this mix, we consider the uniform probability measure $P_n$ on $\Omega_n$ – that is, we consider each path to be equally likely.
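(For small $n$ one can enumerate $\Omega_n$ by brute force. Here is a toy counter – my own sketch, with made-up names – which on $\mathbb{Z}^2$ reproduces the well-known first few counts $c_1 = 4$, $c_2 = 12$, $c_3 = 36$, $c_4 = 100$.)

```python
def saw_count(n, d=2):
    """Count the n-step self avoiding walks on Z^d starting at the origin."""
    steps = []
    for i in range(d):
        for s in (1, -1):
            e = [0] * d
            e[i] = s
            steps.append(tuple(e))

    def extend(tip, visited, remaining):
        if remaining == 0:
            return 1
        total = 0
        for step in steps:
            nxt = tuple(a + b for a, b in zip(tip, step))
            if nxt not in visited:  # enforce self avoidance
                total += extend(nxt, visited | {nxt}, remaining - 1)
        return total

    origin = (0,) * d
    return extend(origin, {origin}, n)

print([saw_count(n) for n in range(1, 7)])  # [4, 12, 36, 100, 284, 780]
```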
A few observations are in order. First, what we are dealing with is not really a true “self avoiding random walk” in the naive sense (I believe Slade used the term “myopic” random walk for the naive version). Naively speaking, when somebody hears the term “self avoiding random walk”, they probably imagine a process that behaves like a random walk but intelligently samples points at each step to avoid hitting itself, for example by sampling uniformly from the neighboring unvisited lattice nodes. The problem with this “local description” is that a walk could box itself in. Imagine playing that new worm game where you try to eat the apples and die if you hit your tail: some path decisions lead to inevitable failure. The object we would have then really wouldn’t be a self avoiding walk of length $n$, but rather one of some shorter length.
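(Here is a quick simulation of this trapping phenomenon on $\mathbb{Z}^2$ – again my own toy code with arbitrary parameters. Asking for 1000 steps is wildly optimistic; the walk usually boxes itself in much sooner.)

```python
import random

def myopic_walk(n, rng):
    """'Myopic' walk on Z^2: step uniformly among unvisited neighbours.
    The returned path may be shorter than n if the walk traps itself."""
    path = [(0, 0)]
    visited = {(0, 0)}
    for _ in range(n):
        x, y = path[-1]
        free = [(x + dx, y + dy)
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if (x + dx, y + dy) not in visited]
        if not free:  # boxed in, like the worm eating its own tail
            break
        nxt = rng.choice(free)
        path.append(nxt)
        visited.add(nxt)
    return path

rng = random.Random(1)
lengths = [len(myopic_walk(1000, rng)) - 1 for _ in range(100)]
print(min(lengths), max(lengths))  # most walks die well before 1000 steps
```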
Another observation is that these things are weird, from a probabilistic point of view. They are nothing like Markov processes – for two very good reasons. First, which is probably apparent, is that the behavior of a path has to crucially depend on the history of the path (and thus we do not have the Markov property). But what is perhaps a bit surprising (although not so much if you actually sit down and think about it) is that in fact the family of measures $P_n$ does not even generate a process in the usual sense – the probability distributions on $\Omega_n$, as $n$ varies, are not consistent in the Kolmogorov sense!
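(The inconsistency is easy to see concretely on $\mathbb{Z}^2$: most 3-step self avoiding walks have 3 self avoiding extensions, but the ones tracing three sides of a unit square have only 2, so the first three steps of a uniform 4-step walk are not uniformly distributed over $\Omega_3$. A brute-force check, again my own sketch:)

```python
from collections import Counter

def neighbours(pt):
    x, y = pt
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def saws(n):
    """All n-step self avoiding walks on Z^2, as tuples of lattice points."""
    walks = [((0, 0),)]
    for _ in range(n):
        walks = [w + (p,)
                 for w in walks
                 for p in neighbours(w[-1])
                 if p not in w]
    return walks

# Law of the first 3 steps under the uniform measure on 4-step walks:
extension_counts = Counter(w[:4] for w in saws(4))
print(sorted(set(extension_counts.values())))  # [2, 3] -- not constant!
# So P_4 restricted to the first three steps is NOT P_3: no consistency.
```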
One of the big ideas in the study of self avoiding random walks is that their behavior really depends tremendously on the dimension $d$. In the one dimensional case, their study is pretty much trivial: you only have two self avoiding random walks of any length, one going left, the other going right. The two dimensional case is fascinating and has been the subject of much research (including a Fields Medal for Wendelin Werner). There are a lot of conjectures about various quantities, and in particular, about the details of the scaling limit. If simple random walks converge to Brownian motion in an appropriate sense, what do self avoiding random walks converge to? In the $d = 2$ case, there is a result which says that if there is such a (conformally invariant) scaling limit, then it must be SLE(8/3). Precisely what that is, I’m not so sure, but needless to say this is still an open problem: while the target for a scaling limit has been found (a huge accomplishment in itself), no such functional limit theorem has actually been established.
For $d = 3$, not a whole lot is known. And I mean that seriously – people have run simulations and come up with some non-rigorous ideas of the behavior, but that’s it. By the way, Slade had a pretty good quote about this: “For physicists, these are called facts. For mathematicians, they are predictions.”
For $d = 4$, Slade talked about some of his own work. I didn’t really follow this a whole lot – there is a lot of really sophisticated analysis, brutal combinatorics, and tooling from quantum field theory involved. One thing that stuck out in my head is a result of Slade and David Brydges that hinged on an argument using the so-called Berezin integral, which is a functional integral used in quantum field theory to do stuff with fermionic fields. Fucking magnets, how do they work?
For $d \geq 5$ there is actually a fairly simple theory. The scaling limit is just Brownian motion! The heuristic Slade gave is that, basically, the path of a simple random walk is generically a two dimensional object, so for it to be self avoiding we can think about splitting it up into two parts and then asking that these don’t intersect – and since two two-dimensional objects generically don’t intersect once $d > 2 + 2$, there’s enough room that things naturally tend to not intersect. So asymptotically the self avoidance constraint doesn’t change the large-scale behavior, and the scaling limit for self avoiding random walks reduces to the scaling limit for simple random walks. Or something like that.
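(One can see this dimension heuristic numerically: run two independent simple random walks from the origin and ask how often their ranges cross. The following Monte Carlo sketch – mine, with arbitrary walk lengths and trial counts – shows intersections becoming rarer as $d$ grows past 4.)

```python
import random

def walk_range(n, d, rng):
    """The set of points visited by an n-step simple random walk on Z^d."""
    pos = (0,) * d
    visited = {pos}
    for _ in range(n):
        i = rng.randrange(d)
        pos = pos[:i] + (pos[i] + rng.choice((1, -1)),) + pos[i + 1:]
        visited.add(pos)
    return visited

rng = random.Random(0)
trials = 300
for d in (2, 3, 4, 5, 6):
    hits = sum(
        len(walk_range(200, d, rng) & walk_range(200, d, rng)) > 1  # > shared origin
        for _ in range(trials)
    )
    print(d, hits / trials)  # intersection frequency drops as d passes 4
```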
One last fact I found interesting was that, at least in the case $d = 2$, one can consider other regular tessellations of the plane (the lattice $\mathbb{Z}^2$ is basically walking along a tessellation by squares), and in fact these all have the same asymptotic scaling behavior (the growth constants differ from lattice to lattice, but the scaling exponents are believed to be universal). This was pretty surprising to me, and I sort of wondered how exactly the underlying geometry of the space enters into this behavior (i.e. a square lattice and a triangular lattice are both approximations of the Euclidean plane, but what would happen if we looked at self avoiding random walks along a regular tessellation of the hyperbolic plane?).
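(Out of curiosity, one can compare the walk counts on the square and triangular lattices with a small enumeration – my own sketch, reusing the brute-force idea from above, with the triangular lattice written in axial coordinates. The counts, and hence the growth constants, clearly differ; the universality conjecture is about the subexponential corrections.)

```python
def lattice_counts(neighbour_steps, n_max):
    """Self avoiding walk counts c_1..c_{n_max} on a 2-d lattice given by
    its neighbour displacement vectors."""
    counts = []
    walks = [((0, 0),)]
    for _ in range(n_max):
        walks = [w + ((w[-1][0] + dx, w[-1][1] + dy),)
                 for w in walks
                 for dx, dy in neighbour_steps
                 if (w[-1][0] + dx, w[-1][1] + dy) not in w]
        counts.append(len(walks))
    return counts

square = [(1, 0), (-1, 0), (0, 1), (0, -1)]
triangular = square + [(1, -1), (-1, 1)]  # six neighbours per site
print(lattice_counts(square, 5))      # [4, 12, 36, 100, 284]
print(lattice_counts(triangular, 5))  # [6, 30, 138, 618, 2730]
# Different growth constants, but conjecturally the same critical exponents.
```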