This paper is intended to be a guide for non-specialists, not a sophisticated scientific analysis of the cosmological constant.


Astronomy 1 Term Paper

Claudiu Simion


Do We Really Need a Cosmological Constant?

Introduction

In 1916, Albert Einstein formulated his General Theory of Relativity without any thought of a cosmological constant. The prevailing view of the time was that the Universe had to be static. Yet, when he tried to model such a universe, he realized he could not do it unless he either assumed a negative pressure of matter (a totally unreasonable hypothesis) or introduced a term (which he called the cosmological constant) acting like a repulsive gravitational force.

Some years later, however, the Russian physicist Friedmann described a model of an expanding universe in which there was no need for a cosmological constant. The theory was soon confirmed by Hubble's discovery of the redshift of galaxies. Following from that, Hubble established the law that bears his name, according to which any two galaxies recede from each other with a speed proportional to the distance between them. That is, mathematically:

V = H D,

where H was named the Hubble constant.
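To make the law concrete, here is a minimal numerical sketch; the distance and the value of H are illustrative assumptions, not measurements from this paper:

```python
# Hubble's law, V = H * D, with illustrative numbers.
H = 65.0   # assumed Hubble constant, km/s/Mpc
D = 100.0  # assumed distance to a galaxy, Mpc

V = H * D  # recession velocity, km/s
print(f"V = {V:.0f} km/s")  # -> V = 6500 km/s
```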

From this point on, the idea of a cosmological constant was for a time forgotten, and Einstein himself called its introduction "his greatest blunder", mostly because it was later demonstrated that a static Universe would be in an unstable equilibrium and would tend to be anisotropic. In most cosmological models that followed, the expansion shown in Hubble's law simply reflected the energy remaining from the Big Bang, the initial explosion that is supposed to have generated the Universe.

It wasn't until relatively recently - the 1960s or so, when more accurate astronomical and cosmological measurements could be made - that the constant began to reappear in theories, needed to compensate for the inconsistencies between mathematical considerations and experimental observations. I will discuss these discrepancies later. For now, I'll just say that this strange parameter, λ, as Einstein called it, became once again an important factor in the equations trying to describe our universe: a repulsive force introduced no longer to balance a negative matter pressure, but to account for too small an expansion rate, as measured from Hubble's law or cosmic microwave background radiation experiments. I will show, in the next section, how all these cosmological parameters are linked together, and that it is sufficient to accurately determine only one of them for the others to be assigned a precise value. Unfortunately, there are many controversies over the values of such constants as the Hubble constant H, the age of the Universe t, its density ρ, its curvature radius R, and our friend λ.

Although I titled my paper with a question, I will probably not be able to answer it properly, since many physicists and astronomers are still debating the matter. I will try, however, to point out the certainties - relatively few in number - and the uncertainties - far more numerous, for sure - that exist at this time in theories describing the large-scale evolution of the Universe. I will emphasize, of course, the arguments for and against the use of a cosmological constant in such models, and I would like to make sure that my audience gets a general view of the subject, in the way that I could understand it.

A Few Mathematical Considerations, or What Einstein Did

Since this is not a general relativity paper, I will present, in the simplest way possible, how Albert Einstein arrived at the conclusion that a cosmological constant is necessary for describing a static Universe. Imagine a sphere of radius R which has a mass M included inside its boundaries. Let m be a mass situated just on the boundary. We can then write:

ma = -GMm/R², and M = (4π/3)R³(ρ + 3p/c²), hence:

a = -(4πG/3)(ρ + 3p/c²)R,
where a is the acceleration of mass m, G is the gravitational constant, and p is the pressure of the radiation, which contributes, along with matter density, to the overall density of the Universe.

(Think of the sphere as our Universe, and of the mass m as the farthest galaxy.)

At a glance, the Universe cannot be static unless a is zero, which requires p = -ρc²/3, a negative value. This is an unreasonable hypothesis, so Einstein introduced a repulsive force characterized by the cosmological constant λ to fix this inconvenience and rescue his model. I will not reproduce the calculations here, but just imagine that, instead of writing the energy conservation equation in the form:

E/m = V²/2 - GM/R, you introduce the term (-λR²/2) on the right side. (1)

As Einstein calculated it, the cosmological constant has, in his static model, the final expression:

λ = 4πGρ/c².

The curvature radius of the universe can be further determined from that, as:

R = 1/√λ.
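A rough numerical sketch of these two formulas, assuming for illustration a mean density of 10⁻²⁸ kg/m³ (a value used later in the paper); the constants are standard SI values:

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s
rho = 1e-28     # assumed mean density, kg/m^3 (illustrative)

lam = 4 * math.pi * G * rho / c**2  # static-model cosmological constant, m^-2
R = 1 / math.sqrt(lam)              # corresponding curvature radius, m

print(f"lambda ~ {lam:.1e} m^-2")   # ~ 9e-55 m^-2
print(f"R ~ {R:.1e} m")             # ~ 3e27 m, roughly 1e5 Mpc
```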
As I stated in the introduction, all the fundamental parameters characterizing the Universe are linked by equations. Ignoring the constants and the computational details, I will give the currently accepted relations. Thus, the age of the universe is connected to the Hubble constant through:

t ~ 1/(2H), in a radiation-dominated universe, and

t ~ 2/(3H), in a matter-dominated universe.
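A quick sketch of what these limits mean in years, assuming H = 65 km/s/Mpc (a value quoted below); the conversion factors are standard:

```python
KM_PER_MPC = 3.086e19  # kilometres in one megaparsec
SEC_PER_YR = 3.156e7   # seconds in one year

H = 65.0 / KM_PER_MPC               # Hubble constant in s^-1
hubble_time = 1.0 / H / SEC_PER_YR  # 1/H in years, ~15 Gyr

print(f"1/(2H) ~ {hubble_time / 2 / 1e9:.1f} Gyr  (radiation dominated)")
print(f"2/(3H) ~ {2 * hubble_time / 3 / 1e9:.1f} Gyr  (matter dominated)")
```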

The connection between H and the density of the universe (in the Einstein-de Sitter model, but other models do not state anything significantly different) is:

ρ = 3H²/(8πG).
It is a matter of philosophy to ask which of these parameters is crucial in understanding the others. They are all intimately linked. From this point on, we have to rely on what one can actually measure. The density and the age can only be estimated, or at best indirectly determined. The cosmological constant is not even a certainty. Thus, the one we eventually have to deal with is the Hubble constant, which can be calculated by observing the redshift of distant galaxies. But there are plenty of controversies over its value as well, ranging between 50 and 100 km/s/Mpc. One of the accepted values is 65 ± 5 km/s/Mpc. This too is uncertain, since scientists do not agree on the methods of measuring it, and in some theories it is not consistent with the age of the Universe as determined from the cosmic microwave background radiation or from globular cluster measurements (see The New Situation section).

For a better understanding of these issues, let's see what the different models agree and disagree on.

Models of the Universe

Concerning the origin of the universe, contemporary views converge on two different hypotheses: the first, and the most popular, is the Big Bang theory, which states that the universe originated from a primordial explosion involving a singularity with infinitely high matter density and infinitely small size. The second, proposed recently, suggests a sinusoidal universe, which starts at a singularity similar to the Big Bang, but with finite density (supposedly that of nuclear matter), expands for a while to a maximum state, then begins to contract back to the singularity from which it was generated. The cycle then repeats. The age of such a universe makes no sense unless we calculate it from the most recent explosion. That is why, in the end, the two hypotheses converge once one disregards the philosophical implications.

According to the most recent opinions (Hoyle, Burbidge and Narlikar, March 1997), the accepted cosmological models can be classified into three large categories, each with its own general characteristics.

  1. the standard Big Bang cosmologies, with or without inflation.

[Figure: Albert Einstein, one of the most famous scientists of all time, thinking, probably, whether he should or should not introduce the cosmological constant. Caption: "Lambda or no Lambda?"]

They usually follow from Einstein's equations of general relativity. Although the Einstein-de Sitter model cannot be included in this class, it is the one that gave birth to all the models in this category. I will explain why.

When Einstein tried to picture a static universe, he noticed that he could not do so unless he introduced the cosmological constant λ, as a "cosmic repulsion force" to compensate for the gravitational pressure of matter. However, this ad hoc introduction into such a rigorous theory as General Relativity pleased no one (not even Einstein himself). So mathematicians all over the world began searching for other models that would eliminate this constant.

Before I present what happened next, I would like to point out some of the characteristics of the Einstein-de Sitter model, for a better understanding of why it later proved inadequate.

Its main feature is that it requires the cosmological constant to picture a static universe. From the equations, two solutions can be derived, of which only one was worked out by Einstein. According to it, space has a positive curvature, but the time line is straight, so that no event will reoccur. The two-dimensional analogy of this is the surface of a cylinder. Some years later, the Dutch astronomer Willem de Sitter discovered the second allowed solution, a universe in which both space and time are curved. The analogy follows naturally - the surface of a sphere. The assumptions and the calculations of the two scientists were gathered in what is today called the Einstein-de Sitter model.

In 1922, however, the Russian mathematician Alexander Friedmann showed that such a static universe is in an unstable equilibrium. That is why, he argued, any slight change in the general parameters or in local equilibrium states would generate discrepancies between the behaviors of different parts of the universe. This result implies anisotropy. However, there was no reason at that time to suppose that the universe was anisotropic, and later measurements of the cosmic microwave background radiation showed that this is clearly not the case.

Moreover, Friedmann came up with other allowed solutions, in which, without contradicting General Relativity, the universe need not be static. According to his equations, it can either expand or contract with time. Einstein recognized the importance of this discovery, and called his theory of the cosmological constant the "greatest blunder of my life". At approximately the same time, the American astronomer Edwin P. Hubble made a very important observation that would confirm Friedmann's theory: he measured the redshift of several remote galaxies, and concluded that they are receding from us with a velocity proportional to their distance. This happened in every direction he looked and, unless we suppose we occupy a privileged position in the universe (which is absurd), the result demonstrates that the universe is indeed expanding, and that there is no privileged point such as a center of expansion.

Models included in this category therefore do not need a cosmological constant, because they account for the expansion by supposing that it is driven mainly by the energy remaining from the Big Bang, the explosion that initiated the universe. In such a model, the natural question is whether the expansion will ever cease. It can be calculated that, if the density ρ is above a certain value ρc (called the critical density), the expansion will stop at a certain moment, and a contraction will begin. If ρ < ρc, the expansion will go on forever. If ρ = ρc, the universe will asymptotically tend to a steady size that it will never quite reach. The critical density can be calculated, and its value is given by the following formula:

ρc = 3H²/(8πG), (2)

where H is the Hubble constant of the epoch. The main puzzle, again, is the uncertainty in the value of H. Hoyle, Burbidge and Narlikar, in their paper "On the Hubble constant and the cosmological constant", list an interesting table showing the different methods used to calculate H, and the resulting values. They range between 43 ± 11 km/s/Mpc and 55 ± 8 km/s/Mpc. However, they state that there is another current of opinion, led by G. de Vaucouleurs, which supports a value of ~80-100 km/s/Mpc. As we can see, quite a divergence. Although the values may not seem so different, one must realize that a factor of 2 is involved, and therefore the density will change by a factor of 4. If this means a shift from 0.4ρc to 1.6ρc, it makes a lot of difference! By the way, the ratio ρ/ρc is called the density parameter, denoted Ω.
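The factor-of-4 claim is easy to check with equation (2); here is a minimal sketch, with standard constants and the two H values bounding the range quoted above:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
KM_PER_MPC = 3.086e19  # kilometres per megaparsec

def rho_crit(h_kms_per_mpc):
    """Critical density from equation (2): 3 H^2 / (8 pi G), in kg/m^3."""
    h = h_kms_per_mpc / KM_PER_MPC  # convert to s^-1
    return 3 * h ** 2 / (8 * math.pi * G)

for h in (50, 100):
    print(f"H = {h:3d} km/s/Mpc -> rho_c = {rho_crit(h):.2e} kg/m^3")
# Doubling H quadruples rho_c, since rho_c scales as H^2.
```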

In conclusion, this category includes closed (Ω > 1), open (Ω < 1) and flat (Ω = 1) models. In each case the age of the universe is less than, greater than, or equal to 2/(3H), respectively. The latest measurements indicate a tH product closer to 1 than to 2/3, so the most plausible hypothesis might be that we live in a universe that will expand forever. This is consistent with a measured Ω of ~0.2. Of course, these data are not a certainty...


  2. the modified Big Bang cosmologies with cosmological constant.

These are more recent than the models in category 1, and appeared from the discrepancies between theoretical predictions and experimental observations. What they basically do is reintroduce the cosmological constant λ, much in the same way Einstein did, to adjust the age of the universe. The result is that one can obtain arbitrarily large ages by tuning λ. The disadvantage, of course, is the difficulty of explaining the nature of such a parameter. The limiting model, however wrong, is the Einstein-de Sitter one, which ultimately implies:

λ = 4πGρ/c², and R = 1/√λ,

R being the curvature radius of the universe (also see Mathematical Considerations section).

Lately, scientists have managed to give some alternative explanations for the existence of λ, and these have eased the acceptance of the models included in this category. I will discuss these interpretations in the last section, so I will set them aside for now, asking the reader's forgiveness.

  3. the quasi-steady state cosmology

The theoretical foundations of this cosmology rest on the continual creation of matter by means of a scalar field m (Hoyle, Burbidge and Narlikar), whose ups and downs add up to an exponential expansion on which a short-term oscillation is superimposed. This model requires a cosmological constant, which is, however, negative!

The main advantage of this cosmology is that the age of the universe makes sense only from one minimum of the oscillation to the next. Hoyle found that the time span since the last such minimum is approximately 14 billion years. This could be less than the age of the oldest objects; however, in this theory such objects can easily be accommodated.

Another feature of this category is that it views both the gravitational and the cosmological constant as derived from the mass field m; they are thus no longer fundamental, but are given by equations of the form:

[two equations, given in Hoyle, Burbidge and Narlikar, expressing G and λ in terms of m0 and N]

where m0 is a characteristic constant of the field m, and N is the number of nucleons per Planck particle (an ideal object having the highest conceivable energy for an elementary particle). In the above equations, ℏ = h/2π was set equal to 1. The significance of all this defies the understanding of most non-specialists. What is really important for this paper is the value obtained for λ: 7×10⁻⁵⁷ cm⁻², very small, consistent with other recent cosmologies.
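A one-line unit check (1 cm⁻² = 10⁴ m⁻²) puts this value alongside the SI bounds discussed in the last section:

```python
lam_qssc_cgs = 7e-57              # the quoted QSSC value, cm^-2
lam_qssc_si = lam_qssc_cgs * 1e4  # 1 cm^-2 = 1e4 m^-2
print(f"{lam_qssc_si:.0e} m^-2")  # 7e-53 m^-2, near the ~1e-52 m^-2 bound below
```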

Attractive as it is, the above theory follows closely from the steady-state model, also proposed by Hoyle. Unfortunately, that model proved to be wrong, because there is strong evidence that the universe was hotter and denser in the past, ruling out the steady net creation of matter. The quasi-steady state cosmology tries to make a trade-off between the author's old idea and the observations by saying that the mass curve is a sinusoid, with epochs of creation compensated by periods of loss. Yet scientists do not seem to accept his arguments with ease.

Whatever the model of the universe, the cosmological constant problem remains open. The reason most often invoked for using it is to match the age of the universe with the actual measurements. However, finding a reasonable interpretation for such a term is the hardest part of these theories.

The New Situation

After Friedmann and Hubble, each by his own means, proved the viability of an expanding universe, the cosmological constant was, as I said, forgotten. Yet no subsequent model (Lemaître, 1933 and 1934, for example) fully succeeded in being consistent with the experimental facts. Before any other comments, let us see what these facts were, and still are.

Firstly, there is the microwave background radiation. By measuring its characteristics, one can make predictions about the age of the universe, as this radiation, now at a temperature of 2.7 K, is a remnant of the era immediately following the Big Bang.

Secondly, one can determine the age of the oldest observable objects. How? One way is to measure their redshift, then apply Hubble's law. Another method is to study the old stars in nearby globular clusters, which are thought to have ended their main-sequence evolution.

Thirdly, one may estimate the total mass of matter, in order to come up with a value for the density of the universe (I will explain later why this recently became questionable). These are probably the most important tools to date, even though they are not the only ones.

Nevertheless, they are not infallible. Scientists still argue about the reliability of these measurements, and it turns out that seemingly insignificant changes in the magnitudes of the various quantities involved (such as the Hubble constant; see the previous section) may influence the overall picture of the universe. That is why so many different cosmologies are being described nowadays without being ruled out.

In 1975, Gunn and Tinsley gathered together the observations available at the time and concluded: "New data on the Hubble diagram, combined with constraints on the density of the universe and the ages of galaxies, suggest that the most plausible cosmological models have a positive cosmological constant, are closed, too dense to make deuterium in the Big Bang, and will expand forever." In the following years, however, the first evidence for the existence of dark matter was found, and the "constraints on the density of the universe" were no longer tight.

There might be up to 10 times more dark matter than luminous matter in the universe. The number is, of course, uncertain, but it can be estimated from kinematic calculations: the mass of galaxies as measured by electromagnetic methods (optical, radio, X-rays, etc.) is too small to account for the rotation curves of the gas and stars filling them. We have no other means to determine how much dark matter is up there "in the skies", since it does not emit any radiation that we could detect and analyze. But the fact itself is remarkable, because it pushes the limits on the actual density of the universe up to a point where many speculations can be made.

One other important point: in recent decades, cosmology has been driven to new horizons as more and more particle physicists began to work in the field. Quantum physics principles had to be added to general relativity, making cosmology even "more complicated".

All these breakthroughs generated a race to describe models of the universe. Some of the old ones were modified; new ones appeared. The classic example of the former is the appearance of λ on the left side of the energy equation (see Mathematical Considerations) -- equivalent to a negative λ on the right side -- as a characteristic "strength" of the Big Bang explosion. By adjusting the cosmological constant and the density parameter (allowed by the discovery of dark matter), scientists were able to match this type of model with the experiments.

Others have said that, while the accepted value was Ω ~ 0.1 before the dark matter affair, the new data suggest a magnitude closer to 1. The simplest and most elegant solution was to assume a flat universe (Ω = 1) with λ = 0 (more recent calculations tend to show, however, that the density parameter is in fact still less than 1).

More sophisticated models have been described as well. In Hoyle and Narlikar's cosmology, λ is negative, but it does not change the usual results, because it depends on time in the same way as the density ρ. Ernst Schmutzer (1984) constructed a "5-dimensional projective space" cosmology, with a variable gravitational constant. Physicists like Baum, Hawking, Coleman and Sandage remained loyal to Big Bang models, and tried to argue for the cosmological term. Why? Because, indeed, even if the models mentioned above invoke "various combinations of non-baryonic options, cold dark matter, hot dark matter", or quantum effects, none has proved theoretically satisfactory and completely consistent with the observations. λ is still a necessity, and new interpretations are being given. Unfortunately, the old problems still stand...

Problems and Recent Interpretations


Figure 1 (Chaisson/McMillan - Astronomy Today)

Why is it so important to know the magnitude and sign of the cosmological constant? The figure shows how this parameter can influence the dependence of the size of the universe on time. The allowed range, at least theoretically, is huge, even if we fix today's value of the Hubble constant. Hypotheses with no Big Bang, or picturing a static universe, become as reasonable as the older models, whatever end they predicted for the universe - Big Crunch or expansion forever.

In quantum physics, an effect has been described that is unimaginable in everyday life. It is called the "tunnel effect": when a particle is confined in a potential well with walls of finite height, there is still a probability that the particle will pass through the walls without being supplied with external energy. This result is remarkable, and it proved to have profound implications in cosmology.

The universe is composed of stars, gas, intergalactic dust and, "recently", dark matter. Its density is at most on the order of 10⁻²⁸ kg/m³. This roughly means 1 hydrogen atom per 10,000 liters of space. The volume of such an atom is about 10⁻³⁰ m³. The rest is vacuum, which is supposed to have zero density.
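The "one atom per 10,000 liters" figure follows directly from the density; a minimal check (the hydrogen mass is standard, the density is the value above):

```python
rho = 1e-28     # maximum mean density of the universe, kg/m^3 (from the text)
m_H = 1.67e-27  # mass of one hydrogen atom, kg

n = rho / m_H                 # number density, atoms per m^3 (~0.06)
litres_per_atom = 1000.0 / n  # 1 m^3 = 1000 litres

print(f"~{litres_per_atom:.0f} litres per atom")  # on the order of 10^4 litres
```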

However, quantum physicists managed to derive another strange but beautiful result, this time regarding the energy of the vacuum. Since E = mc², this translates immediately into a density. It turns out that there is a small probability, much as in the tunnel effect, that the vacuum will have a non-zero density! And since in this kind of physics the macroscopic state is a superposition of the microscopic states, the vacuum HAS, overall, a non-zero density.

This is the nicest interpretation assigned to the cosmological constant nowadays, although scientists argue over whether such a quantum analogy is appropriate in this case. The way it was computed, the vacuum density is related to λ through:

ρvac = λc²/(8πG).
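To see what this relation implies numerically, here is a sketch assuming λ near the 10⁻⁵² m⁻² upper bound quoted further below (an illustrative choice, not a measured value):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8    # speed of light, m/s

def rho_vac(lam):
    """Vacuum density from rho_vac = lam * c^2 / (8 pi G), in kg/m^3."""
    return lam * c ** 2 / (8 * math.pi * G)

lam = 1e-52  # m^-2, illustrative value near the quoted bound
print(f"rho_vac ~ {rho_vac(lam):.1e} kg/m^3")  # ~5e-27, comparable to rho_c
```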

Other scientists say that, because of quantum effects, ρvac could be negative, so λ can still have either sign. The most reasonable (and consistent) hypothesis, though, is a positive cosmological constant. Yet the main argument against this assumption is that, to build such a model, λ would have to be too large (i.e., larger than the current determinations, which I will discuss later). However, Steven Weinberg, Coleman and Hawking, and Venzo de Sabbata have shown that a value of the cosmological constant consistent with the behavior of the early phases of the universe (when the energy density of the vacuum was 10⁹⁴ g/cm³!) can be derived.

Another elegant interpretation of the cosmological constant, somewhat following from the above, is in terms of entropy. As the universe expands, its entropy increases, and it can be shown that the maximum value is 10¹²⁰ kB (kB being the Boltzmann constant). The relation to λ might seem obscure, but I will note here that S ∝ λ^(-1/2), and explain this result later.

Now, the problem. How big is the cosmological constant? The calculations show that it dominates the energy equation ((1), see Mathematical Considerations) when

λR²/2 > GM/R, which means λ > 8πGρ/3.

If we assume a density of about 10⁻²⁸ kg/m³, then λ > 5×10⁻³⁸ m⁻². This is, however, way too large relative to the values of the cosmological constant determined on theoretical grounds.

Indeed, at H = 100 km/s/Mpc, ρc ≈ 2×10⁻²⁹ g/cm³ (see equation 2), and, assuming that the density of the vacuum cannot exceed the critical density by more than 100 times, one obtains a maximum of 10⁻⁵² m⁻², a much smaller value than above. Moreover, combined quantum physics and general relativity arguments allow us to compute a minimal value of the cosmological term in the Planck era (the smallest unit of time after the Big Bang that can be analyzed, 10⁻⁴³ s). I will give the formula not to be memorized, but because I think it is remarkable, depending only on fundamental constants:

λPl = c³/(ℏG),

where ℏ is the reduced Planck constant. This value is rather huge: 10⁷⁰ m⁻². So its contribution at that time was perhaps the only significant one. What is relevant, though, is that the ratio λ/λPl is the smallest dimensionless number in physics: ~10⁻¹²⁰! For some reason, the cosmological constant, if it ever existed, dropped drastically during the evolution of the universe.
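Both numbers can be reproduced from the formula; here is a sketch with standard constants, taking the present-day λ to be the 10⁻⁵² m⁻² bound quoted above:

```python
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.055e-34  # reduced Planck constant, J s
c = 2.998e8       # speed of light, m/s

lam_planck = c ** 3 / (HBAR * G)  # Planck-era value, m^-2
lam_today = 1e-52                 # present upper bound used in the text, m^-2

print(f"lambda_Pl ~ {lam_planck:.1e} m^-2")     # ~4e69, i.e. of order 10^70
print(f"ratio ~ {lam_today / lam_planck:.0e}")  # ~3e-122, roughly the 10^-120 of the text
```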

Now the question: if it is non-zero, why is it so small? Or, in other terms, if it is so small, why do we need it? These are the main points of view of the day regarding this long-lasting puzzle. The problem is that, even reduced by a factor of 10⁻¹²⁰, the vacuum density can still exceed the matter density by a factor of 10 to 100! So it can significantly change our view of the universe.

The connection with entropy is the following: in the Planck era, the entropy of the universe was computed to be 10⁶⁰ kB. S goes as λ^(-1/2), and since λ/λPl is 10⁻¹²⁰, the maximum entropy today can be 10¹²⁰ kB. The significance of this limit is still an enigma for scientists, but one must keep in mind that the entropy of a system is intimately linked to both its temperature and its degree of disorder.
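The exponent arithmetic behind this paragraph, as a tiny sketch:

```python
# S scales as lambda^(-1/2): if lambda drops by a factor 10^-120,
# the maximum entropy may grow by (10^-120)^(-1/2) = 10^60.
s_planck_exp = 60     # Planck-era entropy, as an exponent: 10^60 kB
lam_ratio_exp = -120  # lambda / lambda_Pl ~ 10^-120

s_max_exp = s_planck_exp + (-lam_ratio_exp) // 2  # 60 + 60
print(f"S_max ~ 10^{s_max_exp} kB")  # -> 10^120 kB
```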

Using a quantum approach, physicists have calculated an "allowed" value of the cosmological constant. It turned out that, in particle physics units, it should be of order 1! By the same method, in cosmological units, it also has to be of order 1. The observed value, mentioned in the previous paragraph, is, however, 120 orders of magnitude smaller (in particle physics units, the maximum value was calculated as 0.5 or so). This is another relevant observation, which might be there to tell us that we do not need Einstein's greatest blunder to model the world we happen to live in.

The puzzle goes on. New data and more accurate astronomical measurements are necessary to solve the cosmological constant problem and, subsequently, to find the best model for the evolution of our universe. There is no major, clear trend among specialists today, and a final result can hardly be predicted. This is nicely illustrated by an answer I got from a fourth-year graduate student in string theory here at Caltech, when I asked him his opinion on this issue. He said:

"I am a theoretician. I should say is zero. But nobody knows..."

References

  1. F. Hoyle, G. Burbidge, J. V. Narlikar - On the Hubble constant and the cosmological constant, Monthly Notices of the Royal Astronomical Society, vol. 286 (2), pp. 173-182, March 1997
  2. A. Zichichi, V. de Sabbata, N. Sanchez - Gravitation and Modern Cosmology - The Cosmological Constant Problem, Ettore Majorana International Science Series, 1991
  3. M. Rowan-Robinson - Cosmology, 2nd edition, Clarendon Press, Oxford, 1991
  4. P. J. E. Peebles - Principles of Physical Cosmology, Princeton University Press, 1993
  5. E. W. Kolb, M. S. Turner - The Early Universe, Addison-Wesley Publishing Company, 1990
  6. Chaisson, McMillan - Astronomy Today, 2nd edition, Prentice Hall, 1996
  7. P. Coles, F. Lucchin - Cosmology, John Wiley and Sons Ltd., 1995
  8. P. J. E. Peebles - The Large Scale Structure of the Universe, Princeton University Press, 1980
  9. S. Weinberg - The First Three Minutes, Basic Books, Inc., 1977
  10. J. Leslie - Physical Cosmology and Philosophy, Macmillan Publishing Company, 1990
  11. J. Silk - A Short History of the Universe, Scientific American Library, 1994
  12. Scientific American - Cosmology +1, 1977
  13. http://dept.physics.upenn.edu/~myers/ASTR001/L40.html
  14. http://dept.physics.upenn.edu/~myers/ASTR001/L41.html
  15. http://astro.caltech.edu/academics/ay1