It is notoriously difficult to reconcile the two most basic frameworks of the fundamental laws of nature, namely general relativity and quantum theory. The first describes gravity and how it influences space and time—you could say it relates to the large—while the second characterizes the behavior of all known elementary particles—relating to the small.
That they will have to be brought together is nowhere more apparent than in trying to understand the beginning of our universe: that is to say, the very beginning of space, time and, well, everything else.
Such a beginning was derisively called the “big bang” by the cosmologist Fred Hoyle, who ridiculed the idea of there once having been no universe, then a big bang and suddenly the universe, full of light and matter, and expanding as if on space-time steroids. The name has stuck, and for now it seems that we are stuck with the idea too.
Why is it so hard to unify gravity and quantum theory? To illustrate this, it helps to recall the bizarreness of the quantum world. According to quantum theory, nature explores all possibilities to determine the probability with which an event occurs. So when a ball is thrown, absolutely all conceivable trajectories are considered, but only a few weigh in significantly. (In the case of the ball, in fact, only the familiar classical trajectory is relevant.)
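This sum-over-histories picture is often written schematically in Feynman's path-integral notation. The formula below is a standard textbook sketch, not taken from the article itself; S is the action of a trajectory and ħ is Planck's constant:

```latex
% Feynman sum over histories (schematic): the amplitude to go from A to B
% adds a complex phase e^{iS/\hbar} for every conceivable trajectory x(t).
\[
  \langle B \,|\, A \rangle \;\propto\; \sum_{\text{paths } x(t)} e^{\, i S[x(t)]/\hbar}
\]
% For a thrown ball, the phases of wildly different paths cancel one another,
% and only the familiar classical trajectory (where S is stationary) survives.
```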
When this prescription is applied to space-time itself, however, things get tricky. All possible space-time evolutions must be considered, but the trouble is that on tiny scales, the fabric of space-time becomes so messy that it is typically impossible to reliably calculate anything. And to understand the big bang, we would certainly need to know what happens to space-time on tiny scales.
Faced with those difficulties, three cosmologists had a truly ingenious and elegant idea in the early 1980s: James Hartle and Stephen Hawking, with their “no-boundary” proposal, and Alexander Vilenkin, with his “tunneling” model.
The idea is that one should not consider all possible space-time evolutions but only those that have a smooth initial space-time geometry. You can picture this by imagining that the early universe would have been rounded off like the surface of a ball, not only in space but also in time. Time would have had no edge—hence the name of the proposal.
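In the technical literature, this restriction is usually expressed as a wave function of the universe. The schematic formula below is my gloss on the proposal, not the article's own notation; g stands for a four-dimensional space-time geometry and h for the three-dimensional geometry of space today:

```latex
% Hartle–Hawking "no-boundary" wave function (schematic): sum only over
% compact, smooth geometries g with no initial edge in space or time,
% whose one and only boundary is the present-day spatial geometry h.
\[
  \Psi[h] \;\propto\; \int_{\substack{\text{compact } g \\ \partial g \,=\, h}} \mathcal{D}g \;\; e^{\, i S[g]/\hbar}
\]
```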
This idea has two highly desirable consequences. The first is that it might actually allow one to calculate things, as the geometries are forced to be smooth initially, and hence one might not have to deal with the small-scale messiness. The second is that by providing a theory of what effectively replaces the big bang, we would know the starting point of the universe. This would be like having the first verses of the scientific Book of Genesis.
Still, until recently, it remained difficult to calculate the true consequences of this idea. Despite the simplification of having to deal only with these “no-boundary” geometries, it is not easy to calculate how all the different space-time evolutions add up, or to determine which ones matter most. Some aspects of the calculations that had been done since the 1980s were rigorous; other aspects were based on guesses, or, to make it sound less dubious, on “intuition.”
It should be said that this often works. Famous physicists tend to have a knack for knowing, or at least correctly guessing, the right answers even in the absence of a full-blown calculation. Moreover, those tentative results looked very promising.
A few months ago, my collaborators Job Feldbrugge and Neil Turok, both of the Perimeter Institute in Canada, and I realized that there exists a mathematical framework, which has been continuously developed by mathematicians over the last 100 years, that is perfectly suited to performing this kind of calculation reliably.
This framework is called Picard-Lefschetz theory, after the two mathematicians, Émile Picard and Solomon Lefschetz, who initiated the study of these techniques. It is only over the last few years that physicists have become aware of its existence.
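The central trick, roughly stated, is to make the wildly oscillating sums of quantum theory tractable. The formula below is a generic illustration of that idea, assumed here for exposition rather than quoted from our papers: the original integral over real configurations is deformed into the complex plane, onto "steepest-descent" contours (Lefschetz thimbles) on which the integrand decays instead of oscillating:

```latex
% Picard–Lefschetz deformation (schematic): an oscillatory integral over
% the real line is rewritten as a sum over complex contours J_k (thimbles),
% one per relevant saddle point of S, with integer weights n_k.
\[
  \int_{\mathbb{R}} dx \;\, e^{\, i S(x)/\hbar}
  \;=\; \sum_{k} n_k \int_{\mathcal{J}_k} dx \;\, e^{\, i S(x)/\hbar}
\]
% On each thimble the exponent has a constant phase and a decaying
% magnitude, so every contribution can be evaluated reliably.
```

On each such contour the integral converges, which is what makes it possible to say with confidence which geometries dominate, rather than relying on intuition.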
Using these methods, we encountered a surprise. Even when restricting to space-time shapes that are completely smooth initially, the resulting universes have larger and larger fluctuations. In other words, the geometries that develop strong irregularities contribute the most to the final answer. This implies that one would get a highly crumpled universe popping out of nothing and presumably collapsing again right away, rather than expanding into the vast universe we know. It is as if smoothing the universe on one end only results in tying it into knots everywhere else.
So the idea of a smooth beginning does not work, at least not in its present form. Can the idea be rescued? Perhaps, but our calculation has in a sense been quite minimal: In addition to powerful mathematical techniques, we have implemented only the most basic and established principles of physics. Rescuing the idea would demand a rather radical departure from one of those principles, so perhaps it is more reasonable to think that the geometry simply was not of the “no-boundary” type.
The most surprising aspect of our work is the strong interaction between the large and the small. The tiniest fluctuations somehow know about the overall geometry of space-time, and know that they should grow out of bounds.
It is this aspect that we are now trying to understand. After all, our large universe exists—so what mechanism could have kept these fluctuations in check when the universe was tiny? What were the true conditions at the big bang, from which everything we know followed? The mystery remains intact, and our search continues.
Jean-Luc Lehners is leader of the theoretical cosmology group at the Max Planck Institute for Gravitational Physics (Albert Einstein Institute) in Germany.