VIR-NOT-PIS-1390-113

IFUP-TH 58/97

March 1998

High-Energy Physics with Gravitational-Wave Experiments

Michele Maggiore

INFN, sezione di Pisa, and
Dipartimento di Fisica, Università di Pisa,

piazza Torricelli 2, I-56100 Pisa, Italy

We discuss the possible relevance of gravitational-wave (GW) experiments for physics at very high energy. We examine whether, from the experience gained with the computations of various specific relic GW backgrounds, we can extract statements and order-of-magnitude estimates that are as model-independent as possible, and we try to distinguish between general conclusions and results related to specific cosmological mechanisms. We examine the statement that the Virgo/LIGO experiments probe the Universe at temperatures GeV (or timescales sec) and we consider the possibility that they could actually probe the Universe at much higher energy scales, including the typical scales of grand unification, string theory and quantum gravity. We consider possible scenarios, depending on how the inflationary paradigm is implemented. We discuss the prospects for detection with present and planned experiments. In particular, a second Virgo interferometer correlated with the planned one, and located within a few tens of kilometers from the first, could reach an interesting sensitivity for stochastic GWs of cosmological origin.

## I Introduction

The energy range between the grand unification scale GeV and the Planck scale GeV is crucial for fundamental physical questions and for testing current ideas about grand unification, quantum gravity and string theory. Experimental results in this energy range are of course very difficult to obtain. From the particle physics point of view, there are basically two important experimental results that can be translated into statements about this energy range (see e.g. [1] for recent reviews): (i) the accurate measurement of the gauge coupling constants at LEP which, combined with their running with energy, shows the unification of the couplings at the scale , provided that the running is computed including the supersymmetric particles in the low-energy spectrum; and (ii) the negative results on proton decay: the lower limits on the partial lifetimes for the processes and are yr and yr, respectively, and imply a lower bound GeV, which excludes non-supersymmetric SU(5) unification. Further improvement is expected from the SuperKamiokande experiment, which should probe lifetimes yr.

From the cosmological point of view, information on this energy range can only come from particles which decoupled from the primordial plasma at very early times. Particles which stay in thermal equilibrium down to a decoupling temperature can only carry information on the state of the Universe at that temperature. All information on physics at higher energies has in fact been obliterated by the successive interactions.

The condition for thermal equilibrium is that the rate $\Gamma$ of the processes that maintain equilibrium be larger than the rate of expansion of the Universe, as measured by the Hubble parameter $H$ [2]. The rate is given by $\Gamma = n\sigma|v|$, where $n$ is the number density of the particle in question, and for massless or light particles in equilibrium at a temperature $T$, $n\sim T^3$; $|v|$ is the typical velocity and $\sigma$ is the cross-section of the process. Consider for instance the weakly interacting neutrinos. In this case the equilibrium is maintained, e.g., by electron-neutrino scattering, and at energies below the $W$ mass $\sigma\sim G_F^2\langle E^2\rangle\sim G_F^2 T^2$, where $G_F$ is the Fermi constant and $\langle E^2\rangle\sim T^2$ is the average energy squared. The Hubble parameter during the radiation-dominated era is related to the temperature by $H\sim T^2/M_{\rm Pl}$. Therefore [2]

$$\frac{\Gamma}{H}\sim\left(\frac{T}{1\,{\rm MeV}}\right)^3 \qquad (1)$$

Even the weakly interacting neutrinos, therefore, cannot carry information on the state of the Universe at temperatures larger than approximately 1 MeV. If we repeat the above computation for gravitons, the Fermi constant is replaced by Newton's constant $G=1/M_{\rm Pl}^2$ (we always use units $\hbar=c=1$) and at energies below the Planck mass $\sigma\sim T^2/M_{\rm Pl}^4$, so that

$$\frac{\Gamma}{H}\sim\left(\frac{T}{M_{\rm Pl}}\right)^3 \qquad (2)$$

The gravitons are therefore decoupled below the Planck scale. (At the Planck scale the above estimate of the cross-section is not valid and nothing can be said without a quantum theory of gravity.) It follows that relic gravitational waves are a potential source of information on very high-energy physics. Gravitational waves produced in the very early Universe have not lost memory of the conditions in which they were produced, as happened to all other particles, but still retain in their spectrum, typical frequency and intensity, important information on the state of the very early Universe, and therefore on physics at correspondingly high energies, which cannot be accessed experimentally in any other way.
It is also clear that the property of gravitational waves that makes them so interesting, i.e. their extremely small cross-section, is also responsible for the difficulties of the experimental detection.^1

^1 Thinking in terms of cross-sections, one is led to ask how gravitons could be detectable at all, since the graviton-matter cross-section is smaller than the neutrino-matter cross-section, at energies below the $W$ mass, by a factor $(G/G_F)^2$, and neutrinos are already so difficult to detect. The answer is that gravitons are bosons, and therefore their occupation number per cell of phase space can be $n_f\gg 1$; we will see below that in interesting cases, in the relic stochastic background, $n_f$ can be enormous, and the squared amplitude for exciting a given mode of the detector grows as $n_f$. So, we will never really detect single gravitons, but rather classical gravitational waves. Neutrinos, in contrast, are fermions and for them $n_f\leq 1$.
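The decoupling estimates of eqs. (1) and (2) are easy to check numerically. The following sketch works in natural units with the standard values of $G_F$ and $M_{\rm Pl}$, dropping all O(1) prefactors as in the text:

```python
# Order-of-magnitude decoupling estimates in natural units
# (hbar = c = k_B = 1, energies in GeV); O(1) prefactors dropped.
G_F = 1.17e-5      # Fermi constant, GeV^-2
M_PL = 1.22e19     # Planck mass, GeV

def rate_over_hubble_nu(T):
    # Gamma/H with n ~ T^3, sigma ~ G_F^2 T^2, H ~ T^2 / M_Pl
    return G_F**2 * T**3 * M_PL

def rate_over_hubble_grav(T):
    # for gravitons G_F is replaced by G = 1/M_Pl^2
    return (T / M_PL)**3

print(rate_over_hubble_nu(1e-3))    # ~ O(1) at T = 1 MeV: neutrino decoupling
print(rate_over_hubble_grav(1e16))  # << 1 even at the GUT scale
```

Setting each ratio to one reproduces the statements in the text: neutrinos decouple around 1 MeV, while gravitons are out of equilibrium at all sub-Planckian temperatures.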

With the very limited experimental information that we have on the very high-energy region, in this paper we discuss whether, from the experience gained with various specific computations of relic backgrounds, it is possible to extract statements or order-of-magnitude estimates which are as model-independent as possible. These estimates would constitute a sort of minimal set of naive expectations, which could give some orientation, independently of the uncertainties and intricacies of specific cosmological models. We discuss typical values of the frequencies involved and of the expected intensity of the background gravitational radiation, and we try to distinguish between statements that are relatively model-independent and results specific to given models.

The paper is written having in mind a reader interested in gravitational-wave detection but not necessarily competent in early Universe cosmology nor in physics at the string or Planck scale, and a number of more technical remarks are relegated to footnotes and to an appendix. We have also tried to be self-contained, and we have attempted to summarize and occasionally clean up many formulas and numerical estimates appearing in the literature. The organization of the paper is as follows. In sect. 2 we introduce the variables most commonly used to describe a stochastic background of gravitational waves. We give a detailed derivation of the relation between exact formulas for the signal-to-noise ratio and approximate but simpler characterizations of the characteristic amplitude and of the noise. The former variables are convenient in theoretical computations while the latter are commonly used by experimentalists, so it is worthwhile to understand their relations in some detail. In sect. 3 we apply these formulas to compute the sensitivity to a stochastic background that could be obtained with a second Virgo interferometer correlated with the first, and we compare with various other detectors. We find that in the Virgo-Virgo case the noise which would give the dominant limitation to the measurement of a stochastic background is the mirror thermal noise, and we give the sensitivity for different forms of the relic GW spectrum. In sects. 4 and 5 we discuss estimates of the typical frequency scales. We examine the statements leading to the conclusion that Virgo/LIGO will explore the Universe at temperatures GeV (sect. 4), and in the appendix we discuss some qualifications to this statement. In sect. 5 we discuss the possibility of reaching much higher energy scales, including the typical scales of grand unification and quantum gravity. In sect. 6 we discuss different scenarios, depending on how the inflationary paradigm is implemented.
Characteristic values of the intensity of the spectrum and existing limits and predictions are discussed in sect. 7. Sect. 8 contains the conclusions.

## II Definitions

### II.1 $h_0^2\Omega_{\rm gw}(f)$ and the optimal SNR

The intensity of a stochastic background of gravitational waves (GWs) can be characterized by the dimensionless quantity

$$\Omega_{\rm gw}(f)=\frac{1}{\rho_c}\,\frac{d\rho_{\rm gw}}{d\ln f} \qquad (3)$$

where $d\rho_{\rm gw}$ is the energy density of the stochastic background of gravitational waves in the frequency interval $d\ln f$, $f$ is the frequency, and $\rho_c$ is the present value of the critical energy density for closing the Universe. In terms of the present value of the Hubble constant $H_0$, the critical density is given by

$$\rho_c=\frac{3H_0^2}{8\pi G} \qquad (4)$$

The value of $H_0$ is usually written as $H_0=h_0\times 100$ km/(sec·Mpc), where $h_0$ parametrizes the existing experimental uncertainty. Ref. [3] gives one such determination. In the last few years there has been a constant trend toward lower values of and typical estimates are now in the range or, more conservatively, . For instance ref. [4], using the method of type Ia supernovae, gives two slightly different estimates and . Ref. [5], with the same method, finds and ref. [6], using a gravitational lens, finds . The spread of values obtained gives an idea of the systematic errors involved.
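As a quick numerical check of eq. (4), the snippet below evaluates the critical density in CGS units for a given $h_0$; the familiar value $\rho_c\simeq 1.9\times 10^{-29}\,h_0^2$ g/cm³ comes out directly:

```python
import math

G_CGS = 6.674e-8       # Newton's constant, cm^3 g^-1 s^-2
MPC_CM = 3.086e24      # cm in a megaparsec

def critical_density(h0):
    """rho_c = 3 H_0^2 / (8 pi G) in g/cm^3, with H_0 = h0 * 100 km/s/Mpc."""
    H0 = h0 * 100.0 * 1e5 / MPC_CM   # s^-1
    return 3.0 * H0**2 / (8.0 * math.pi * G_CGS)

print(critical_density(1.0))    # ~ 1.9e-29 g/cm^3, i.e. rho_c ~ 1.9e-29 h0^2 g/cm^3
```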

It is not very convenient to normalize $\rho_{\rm gw}$ to a quantity, $\rho_c$, which is uncertain: this uncertainty would appear in all the subsequent formulas, although it has nothing to do with the uncertainties on the GW background. Therefore, we rather characterize the stochastic GW background with the quantity $h_0^2\Omega_{\rm gw}(f)$, which is independent of $h_0$. All theoretical computations of a relic GW spectrum are actually computations of $h_0^2\Omega_{\rm gw}(f)$ and are independent of the uncertainty on $H_0$. Therefore the result of these computations is expressed in terms of $h_0^2\Omega_{\rm gw}$, rather than of $\Omega_{\rm gw}$.^2

^2 This simple point has occasionally been missed in the literature, where one can find the statement that, for small values of $h_0$, $\Omega_{\rm gw}$ is larger and therefore easier to detect. Of course, it is larger only because it has been normalized using a smaller quantity.

To detect a stochastic GW background the optimal strategy consists in performing a correlation between two (or more) detectors, since, as we will discuss below, the signal will be far too low to exceed the noise level in any existing or planned single detector (with the exception of the space interferometer LISA, see below). The strategy has been discussed in refs. [7, 8, 9, 10], and a clear review is ref. [11]. Let us recall the main points of the analysis. The output of any single detector is of the form $s_i(t)=n_i(t)+h_i(t)$, where $i$ labels the detector, and the output is made up of a noise $n_i(t)$ and possibly a signal $h_i(t)$. In the typical situation, $|h_i|\ll |n_i|$. We can correlate the two outputs, defining

$$Y=\int_{-T/2}^{T/2}dt\int_{-T/2}^{T/2}dt'\;s_1(t)\,Q(t-t')\,s_2(t') \qquad (5)$$

where $T$ is the total integration time (e.g. one year) and $Q(t-t')$ a filter function. If the noises in the two detectors are uncorrelated, the ensemble average of the Fourier components of the noise satisfies

$$\langle \tilde n_i^*(f)\,\tilde n_j(f')\rangle=\delta_{ij}\,\delta(f-f')\,\frac{1}{2}S_n^{(i)}(f) \qquad (6)$$

The above equation defines the functions $S_n^{(i)}(f)$, with dimensions Hz$^{-1}$. The factor 1/2 is conventionally inserted in the definition so that the total noise power is obtained integrating over the physical range $0\leq f<\infty$, rather than from $-\infty$ to $\infty$. The noise level of the detector labelled by $i$ is therefore measured by $[S_n^{(i)}(f)]^{1/2}$, with dimensions Hz$^{-1/2}$. The function $S_n^{1/2}$ is known as the square spectral noise density.^3

^3 Unfortunately there is not much agreement about notations in the literature. The spectral noise density, that we denote by $S_n(f)$ following e.g. ref. [9], is denoted by a different symbol in ref. [11]. Other authors use the notation $S_h(f)$, which we instead reserve for the spectral density of the signal. To make things worse, $S_n$ is sometimes defined with or without the factor 1/2 in eq. (6).

As discussed in refs. [7, 8, 9, 10, 11], in the limit $|h_i|\ll |n_i|$, for any given form of the signal, i.e. for any given functional form of $\Omega_{\rm gw}(f)$, it is possible to find explicitly the filter function $Q$ which maximizes the signal-to-noise ratio (SNR). In the case of L-shaped interferometers the corresponding value of the optimal SNR turns out to be (see e.g. ref. [11], eq. (43))

$$\mathrm{SNR}=\left[2T\int_0^\infty df\,\left(\frac{2}{5}\gamma(f)\right)^2\frac{S_h^2(f)}{S_n^{(1)}(f)\,S_n^{(2)}(f)}\right]^{1/4} \qquad (7)$$

We have taken into account the fact that the quantity defined in eq. (5) is quadratic in the signals and, with usual definitions, it contributes to the SNR squared; this differs from the convention used in ref. [11]. The function $\gamma(f)$ is called the overlap function. It takes into account the difference in location and orientation of the two detectors. It has been computed for the various pairs of LIGO1, LIGO2, Virgo and GEO detectors [9]. For detectors very close and parallel, $\gamma(f)=1$. Basically, $\gamma(f)$ cuts off the integrand in eq. (7) at a frequency of the order of the inverse separation between the two detectors. For the two LIGO detectors, this cutoff is around 60 Hz. We will discuss $\gamma(f)$ in sect. II.3, where we will also comment on the modifications needed for different geometries.

In principle the expression for the SNR, eq. (7), is all that we need in order to discuss the possibility of detection of a given GW background. However it is useful, for order of magnitude estimates and for intuitive understanding, to express the SNR in terms of a characteristic amplitude of the stochastic GW background and of a characteristic noise level, although, as we will see, the latter is a quantity that describes the noise only approximately, in contrast to eq. (7) which is exact. We will introduce these quantities in the next two subsections.
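To make the exact SNR formula concrete, here is a sketch of its numerical evaluation for a flat spectrum. The noise curve and overlap function below are illustrative toy models, NOT the real Virgo/LIGO curves; the conventions follow the text, with a factor $(2\gamma/5)^2$ for L-shaped interferometers and the standard relation $S_h(f)=(3H_0^2/4\pi^2)\,\Omega_{\rm gw}(f)/f^3$:

```python
import numpy as np

H0 = 3.24e-18 * 0.65      # s^-1, for an illustrative h0 = 0.65
T_OBS = 3.15e7            # one year of integration, in seconds
OMEGA_GW = 1e-6           # assumed flat (frequency-independent) spectrum

f = np.linspace(5.0, 1000.0, 20000)                       # Hz
S_n = 1e-44 * ((100.0 / f)**4 + 2.0 + (f / 100.0)**2)     # toy noise curve, Hz^-1
gamma = np.exp(-(f / 500.0)**2)                           # toy overlap function

# S_h(f) = (3 H0^2 / 4 pi^2) * Omega_gw(f) / f^3
S_h = 3.0 * H0**2 / (4.0 * np.pi**2) * OMEGA_GW / f**3
integrand = (0.4 * gamma)**2 * S_h**2 / S_n**2
integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(f))
snr = (2.0 * T_OBS * integral)**0.25
print(snr)
```

Note that the SNR scales only as $\Omega_{\rm gw}^{1/2}$, because the correlation is quadratic in the signal.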

### II.2 The characteristic amplitude

A stochastic GW at a given point can be expanded, in the transverse traceless gauge, as (we follow the notations of ref. [11], app.A)

$$h_{ab}(t,\vec x)=\sum_{A=+,\times}\int_{-\infty}^{\infty}df\int d^2\hat n\;\tilde h_A(f,\hat n)\,e^{2\pi i f(t-\hat n\cdot\vec x)}\,e_{ab}^{A}(\hat n) \qquad (8)$$

where $\tilde h_A(-f,\hat n)=\tilde h_A^*(f,\hat n)$. Here $\hat n$ is a unit vector representing the direction of propagation of the wave. The polarization tensors can be written as $e_{ab}^{+}=\hat m_a\hat m_b-\hat l_a\hat l_b$ and $e_{ab}^{\times}=\hat m_a\hat l_b+\hat l_a\hat m_b$, with $\hat m,\hat l$ unit vectors orthogonal to $\hat n$ and to each other. With these definitions, $e_{ab}^{A}e^{A'ab}=2\delta^{AA'}$. For a stochastic background, assumed to be isotropic, unpolarized and stationary (see [11, 12] for a discussion of these assumptions) the ensemble average of the Fourier amplitudes can be written as

$$\langle \tilde h_A^*(f,\hat n)\,\tilde h_{A'}(f',\hat n')\rangle=\delta(f-f')\,\frac{\delta^2(\hat n,\hat n')}{4\pi}\,\delta_{AA'}\,\frac{1}{2}S_h(f) \qquad (9)$$

where $\delta^2(\hat n,\hat n')=\delta(\phi-\phi')\,\delta(\cos\theta-\cos\theta')$. The function $S_h(f)$ defined by the above equation has dimensions Hz$^{-1}$ and satisfies $S_h(f)=S_h(-f)$. The factor 1/2 is conventionally inserted in the definition of $S_h$ in order to compensate for the fact that the integration variable $f$ in eq. (8) ranges between $-\infty$ and $\infty$ rather than over the physical domain $0\leq f<\infty$. The factor $1/(4\pi)$ is inserted so that the integration over the directions $\hat n,\hat n'$ is normalized to the full solid angle. With this normalization, $S_h(f)$ is therefore the quantity to be compared with the noise level $S_n(f)$ defined in eq. (6). Using eqs. (8,9) we get

$$\langle h_{ab}(t,\vec x)\,h^{ab}(t,\vec x)\rangle=4\int_0^{\infty}df\,S_h(f) \qquad (10)$$

We now define the characteristic amplitude from

$$\langle h_{ab}(t,\vec x)\,h^{ab}(t,\vec x)\rangle=2\int_{f=0}^{f=\infty}d(\ln f)\,h_c^2(f) \qquad (11)$$

Note that $h_c(f)$ is dimensionless, and represents a characteristic value of the amplitude, per unit logarithmic interval of frequency. The factor of two on the right-hand side of eq. (11) is inserted for the following reason. The response of the detector to a single wave with amplitudes $h_+,h_\times$ is of the form (ref. [13], eqs. (26) and (103)) $h(t)=F_+h_+(t)+F_\times h_\times(t)$. For an interferometer, $h(t)=\Delta L(t)/L$, where $\Delta L$ is the difference in arm lengths. The function $h(t)$ is also called the gravitational-wave strain acting on the detector. The functions $F_{+,\times}$ are known as detector pattern functions. They depend on three angles, that determine the direction of arrival of the GW and its polarization. For any quadrupole-beam-pattern GW detector we have (ref. [13], pg. 369) $\langle F_+^2\rangle=\langle F_\times^2\rangle$ and $\langle F_+F_\times\rangle=0$, where

$$\langle X\rangle\equiv\int\frac{d^2\hat n}{4\pi}\int_0^{2\pi}\frac{d\psi}{2\pi}\;X \qquad (12)$$

denotes an average over the direction of propagation of the wave, $\hat n$, and over the angle $\psi$ that gives the preferred frame where the wave in the transverse traceless gauge takes the simple form $h_{ab}=h_+e_{ab}^{+}+h_\times e_{ab}^{\times}$ (see [13], pg. 367). This average should not be confused with the time average, i.e., in Fourier space, the ensemble average of eq. (9). In an unpolarized stochastic background $\langle h_+^2\rangle=\langle h_\times^2\rangle$ (where the brackets here denote the ensemble average of eq. (9)) and therefore, since $\langle F_+^2\rangle=\langle F_\times^2\rangle$, the two terms in $h(t)$, after averaging over the angles and over time, give the same contribution. This motivates the factor of two in the definition of $h_c$, eq. (11). For the same reason, we could insert a factor $\langle F_+^2\rangle$ in the definition of $h_c$. However, what is really meaningful is not a characterization of the signal nor of the noise level separately, but rather the signal-to-noise ratio discussed in the previous subsection. We are trying to express the SNR in terms of a ratio of two quantities, $h_c/h_n$. So, we are free to arbitrarily move factors from $h_c$ to $h_n$ as long as the ratio is unchanged. However, it is convenient to collect in $h_c$ all factors related to the source, and in $h_n$ all factors related to the detectors. We do not insert a factor $\langle F_+^2\rangle^{1/2}$ in the definition of $h_c$, and therefore, automatically, we will obtain a corresponding factor in $h_n$, see eqs. (23,25) below. The same is true for numerical factors like the factor of two just discussed. We could as well have neglected it in $h_c$ and we would find an additional factor in $h_n$. We will discuss $h_n$ in the next subsection.

Comparing eqs. (10) and (11), we get

$$h_c^2(f)=2f\,S_h(f) \qquad (13)$$

We now wish to relate $h_c(f)$ and $h_0^2\Omega_{\rm gw}(f)$. The starting point is the expression for the energy density of gravitational waves, given by the 00-component of the energy-momentum tensor. The energy-momentum tensor of a GW cannot be localized inside a single wavelength (see e.g. ref. [15], sects. 20.4 and 35.7 for a careful discussion) but it can be defined with a spatial averaging over several wavelengths:

$$\rho_{\rm gw}=\frac{1}{32\pi G}\,\langle \dot h_{ab}\,\dot h^{ab}\rangle \qquad (14)$$

For a stochastic background, the spatial average over a few wavelengths is the same as a time average at a given point, which, in Fourier space, is the ensemble average performed using eq. (9). We therefore insert eq. (8) into eq. (14) and use eq. (9). The result is

$$\rho_{\rm gw}=\frac{\pi}{2G}\int_0^\infty df\,f^2\,S_h(f) \qquad (15)$$

so that

$$\frac{d\rho_{\rm gw}}{d\ln f}=\frac{\pi}{2G}\,f^3\,S_h(f) \qquad (16)$$

Comparing eqs. (16) and (13) we get the important relation

$$\frac{d\rho_{\rm gw}}{d\ln f}=\frac{\pi}{4G}\,f^2\,h_c^2(f) \qquad (17)$$

or, dividing by the critical density ,

$$h_c^2(f)=\frac{3H_0^2}{2\pi^2}\,\frac{1}{f^2}\,\Omega_{\rm gw}(f) \qquad (18)$$

Inserting the numerical value of , we find (ref. [13], eq. (65))

$$h_c(f)\simeq 1.26\times 10^{-18}\left(\frac{1\,{\rm Hz}}{f}\right)\sqrt{h_0^2\,\Omega_{\rm gw}(f)} \qquad (19)$$

Using eqs. (13,18) we can also write $S_h(f)=\frac{3H_0^2}{4\pi^2}\,f^{-3}\,\Omega_{\rm gw}(f)$. Using this relation, eq. (7) can be written in a more transparent form,

$$\mathrm{SNR}=\left[\left(\frac{3H_0^2}{4\pi^2}\right)^2 2T\int_0^\infty \frac{df}{f^6}\left(\frac{2}{5}\gamma(f)\right)^2\frac{\Omega_{\rm gw}^2(f)}{S_n^{(1)}(f)\,S_n^{(2)}(f)}\right]^{1/4} \qquad (20)$$

The physical reason for the appearance of the factor 2/5 in the above formula will be clear from eq. (24) below. (The number 2/5 is specific to L-shaped interferometers, see below.) The factor $1/f^6$ in front of $\Omega_{\rm gw}^2$ can instead be understood from $S_h(f)=\frac{3H_0^2}{4\pi^2}\,f^{-3}\,\Omega_{\rm gw}(f)$.
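The numerical prefactor in eq. (19) is easy to verify from first principles; the snippet below recomputes it from $H_0/h_0=100$ km/(s·Mpc):

```python
import math

H0_OVER_H0 = 100.0 * 1e5 / 3.086e24    # (H_0 / h_0) in s^-1

def h_c(f_hz, h0sq_omega):
    """Characteristic amplitude: sqrt(3/(2 pi^2)) * (H_0/f) * sqrt(Omega_gw)."""
    return math.sqrt(3.0 / (2.0 * math.pi**2)) * (H0_OVER_H0 / f_hz) * math.sqrt(h0sq_omega)

print(h_c(1.0, 1.0))      # ~ 1.26e-18: the prefactor of eq. (19)
print(h_c(100.0, 1e-6))   # ~ 1.3e-23 at 100 Hz for h0^2 Omega_gw = 1e-6
```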

Finally, we mention another useful formula which expresses $h_0^2\Omega_{\rm gw}(f)$ in terms of the number of gravitons per cell of the phase space, $n_f$. For an isotropic stochastic background $n_f$ depends only on the frequency $f$, and $d\rho_{\rm gw}=2\,\frac{4\pi k^2\,dk}{(2\pi)^3}\,(2\pi f)\,n_f$ with $k=2\pi f$. Therefore $d\rho_{\rm gw}/d\ln f=16\pi^2\,n_f\,f^4$, and

$$h_0^2\,\Omega_{\rm gw}(f)\simeq 3.6\left(\frac{n_f}{10^{37}}\right)\left(\frac{f}{1\,{\rm kHz}}\right)^4 \qquad (21)$$

As we will discuss below, to be observable at the LIGO/Virgo interferometers, we should have at least between 1 Hz and 1 kHz, corresponding to of order at 1 kHz and at 1 Hz. A detectable stochastic GW background is therefore exceedingly classical, $n_f\gg 1$.
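A short numerical inversion of eq. (21) makes the statement about occupation numbers concrete; the value $h_0^2\Omega_{\rm gw}=10^{-6}$ used here is just an illustrative detectable level:

```python
def occupation_number(f_hz, h0sq_omega):
    """Invert h0^2 Omega_gw ~ 3.6 (n_f / 1e37) (f / 1 kHz)^4 for n_f."""
    return 1e37 * (h0sq_omega / 3.6) * (1000.0 / f_hz)**4

print(occupation_number(1000.0, 1e-6))   # ~ 3e30 gravitons per cell at 1 kHz
print(occupation_number(1.0, 1e-6))      # ~ 3e42 at 1 Hz: utterly classical
```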

### II.3 The characteristic noise level

We have seen in the previous section that there is a very natural definition of the characteristic amplitude of the signal, given by $h_c(f)$, which contains all the information on the physical effects, and is independent of the apparatus. We can therefore associate to $h_c(f)$ a corresponding noise amplitude $h_n(f)$, that embodies all the information on the apparatus, defining $h_n(f)$ in terms of the optimal SNR.

If, in the integral giving the optimal SNR, eq. (7) or eq. (20), we consider only a range of frequencies $\Delta f$ such that the integrand is approximately constant, we can write

$$\mathrm{SNR}\simeq\left[2T\,\Delta f\left(\frac{2}{5}\gamma(f)\right)^2\frac{S_h^2(f)}{S_n^{(1)}(f)\,S_n^{(2)}(f)}\right]^{1/4} \qquad (22)$$

The right-hand side of eq. (22) is proportional to $h_c^2(f)$, and we can therefore define $h_n(f)$ by equating the right-hand side of eq. (22) to $(2T\Delta f)^{1/4}\,h_c(f)/h_n(f)$, so that

$$h_n(f)\equiv\left[\frac{5}{\gamma(f)}\,f\left(S_n^{(1)}(f)\,S_n^{(2)}(f)\right)^{1/2}\right]^{1/2} \qquad (23)$$

For L-shaped interferometers, the overlap function $\gamma(f)$ is defined in terms of the detector pattern functions $F_i^A$, where $i=1,2$ labels the detector, as [9]

$$\gamma(f)\equiv\frac{5}{8\pi}\int d^2\hat n\;e^{2\pi i f\,\hat n\cdot\Delta\vec x}\,\sum_A F_1^A(\hat n)\,F_2^A(\hat n) \qquad (24)$$

If the separation $\Delta\vec x$ between the two detectors is very small, and if the detectors have the same orientation, so that $F_1^A=F_2^A$, then (averaging also over the angle $\psi$ discussed in the previous subsection)

$$\gamma(f)=5\,\langle F_+^2\rangle \qquad (25)$$

For L-shaped interferometers $\langle F_+^2\rangle=1/5$ (see ref. [13], eq. (110), or ref. [14]). We see that the normalization factor in the definition of $\gamma(f)$, eq. (24), has been inserted so that for parallel detectors at the same site $\gamma(f)=1$. Therefore, the factor 5 in eq. (23) measures the increase in the noise level due to the fact that stochastic GWs hit the detectors from all directions, rather than just from the direction where the sensitivity is optimal, while $\gamma(f)$ measures the decrease in sensitivity due to the detectors' separation.
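The value $\langle F_+^2\rangle=1/5$ quoted above is easy to verify by Monte Carlo, using the standard pattern functions of an L-shaped interferometer:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000
ct = rng.uniform(-1.0, 1.0, N)            # cos(theta): isotropic arrival directions
phi = rng.uniform(0.0, 2.0 * np.pi, N)
psi = rng.uniform(0.0, 2.0 * np.pi, N)    # polarization angle

# Standard pattern functions of an L-shaped interferometer
F_plus = 0.5 * (1.0 + ct**2) * np.cos(2 * phi) * np.cos(2 * psi) \
         - ct * np.sin(2 * phi) * np.sin(2 * psi)
F_cross = 0.5 * (1.0 + ct**2) * np.cos(2 * phi) * np.sin(2 * psi) \
          + ct * np.sin(2 * phi) * np.cos(2 * psi)

print(np.mean(F_plus**2), np.mean(F_cross**2))   # both ~ 0.2 = 1/5
```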

For a generic geometry, the factors 5 in the above formulas are replaced by $1/\langle F_+^2\rangle$. In the special case $\gamma(f)=1$ (detectors at the same site) the factor in eq. (23) becomes therefore $[f\,S_n(f)/\langle F_+^2\rangle]^{1/2}$, and we recover eq. (66) of ref. [13] (for comparison, note that ref. [13] uses a slightly different notation for this quantity).

From the derivation of eq. (23) we can understand the limitations implicit in the use of $h_n(f)$. It gives a measure of the noise level only under the approximation that leads from eq. (20), which is exact (in the limit $|h_i|\ll|n_i|$), to eq. (22). This means that $\Delta f$ must be small enough compared to the scale on which the integrand in eq. (20) changes, so that the integrand is approximately constant. In a large bandwidth this is non-trivial, and of course it depends also on the form of the signal; for instance, if $\Omega_{\rm gw}(f)$ is flat, then $S_h(f)\propto 1/f^3$. For accurate estimates of the SNR at a wideband detector there is no substitute for a numerical integration of eq. (7) or eq. (20). However, for order-of-magnitude estimates, eq. (19) for $h_c$ and eq. (23) for $h_n$ are simpler to use, and they have the advantage of clearly separating the physical effect, which is described by $h_c(f)$, from the properties of the detectors, that enter only in $h_n(f)$.

Eq. (23) also shows very clearly the advantage of correlating two detectors compared with the use of a single detector. With a single detector, the minimum observable signal, at SNR=1, is given by the condition that the signal be comparable to the noise level. The superscript 1d reminds us that such a quantity refers to a single detector. From eq. (23) we find for the minimum detectable value of $h_c$ with two interferometers in coincidence, $(h_c)_{\rm min}$,

$$(h_c)_{\rm min}=\left[\frac{1}{2T\,\Delta f}\right]^{1/4}h_n \qquad (26)$$

Of course, the reduction factor in the noise level is larger if the integration time is larger, and if we increase the bandwidth $\Delta f$ over which a useful coincidence (i.e. $\gamma(f)\simeq 1$) is possible. Note that $\Omega_{\rm gw}$ is quadratic in $h_c$, so that an improvement in sensitivity by two orders of magnitude in $h_c$ means four orders of magnitude in $\Omega_{\rm gw}$.
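The size of the correlation gain is easy to quantify; with one year of data and a useful bandwidth of order 1 kHz:

```python
T = 3.15e7      # one year, in seconds
df = 1.0e3      # useful bandwidth, Hz
gain_hc = (2.0 * T * df)**0.25
print(gain_hc)       # ~ 5e2: reduction factor on h_c
print(gain_hc**2)    # ~ 2.5e5: corresponding gain on Omega_gw (quadratic in h_c)
```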

## III Application to various detectors

Single detectors. To better appreciate the importance of correlating two detectors, it is instructive to consider first the sensitivity that can be obtained using only one detector. In this case a hypothetical signal would manifest itself as an excess noise, and should therefore satisfy $h_c(f)\gtrsim h_n(f)$. Using eqs. (13,18) we write $h_c(f)$ in terms of $h_0^2\Omega_{\rm gw}(f)$, and $h_n(f)$ in terms of $S_n(f)$. For the minimum detectable value of $h_0^2\Omega_{\rm gw}$ we get

$$h_0^2\,\Omega_{\rm gw}^{\rm min}(f)\simeq\frac{5}{2}\,\frac{4\pi^2}{3H_0^2}\,h_0^2\,f^3\,S_n(f) \qquad (27)$$

Fig. 1, taken from ref. [16], shows the sensitivity of the planned Virgo interferometer, i.e., the quantity as a function of . We see that Virgo, used as a single detector, can reach a minimum detectable value for of order or at most a few times , at 100Hz. Unfortunately, this is not an interesting sensitivity level; as we will discuss in sect. 7, an interesting sensitivity level for is at least of order . To reach such a level with a single Virgo interferometer we need, e.g., at 100 Hz, or at 1 kHz. We see from fig. 1 that such small values of are very far from the sensitivity of first generation interferometers, and are in fact even well below the limitation due to quantum noise.

If we consider instead the NAUTILUS detector, which has a target sensitivity and operates at two frequencies in the kHz region (907 Hz and 923.32 Hz) [17], we find . The analysis using existing EXPLORER data (from 1991 and from 1994) and more recent NAUTILUS data gives a bound on $h_0^2\Omega_{\rm gw}$ [18].

Thus, we see that in these cases we cannot get a significant bound on $h_0^2\Omega_{\rm gw}$. A very interesting sensitivity, possibly even of order , could instead be reached with a single detector, the planned space interferometer LISA [19], at Hz.

Note also that, while correlating two detectors the SNR improves with integration time, see eq. (7), this is not so with a single detector. So, independently of the low sensitivity, with a single detector it is conceptually impossible to tell whether an excess noise is due to a physical signal or to a noise of the apparatus that has not been properly accounted for. This might not be a great problem if the SNR is very large, but certainly with a single detector we cannot make a reliable detection at SNR of order one, so that the above estimates (which have been obtained setting SNR=1) are really overestimates.

Virgo-Virgo. We now consider the sensitivity that could be obtained at Virgo if the planned interferometer were correlated with a second identical interferometer located a few tens of kilometers from the first, and with the same orientation.^4 This distance would be optimal from the point of view of the stochastic background, since it should be sufficient to decorrelate local noises like, e.g., the seismic noise and local electromagnetic disturbances, but still the two interferometers would be close enough that the overlap function does not cut off the high-frequency range, as happens, instead, when correlating the two LIGOs.

^4 Correlations between two interferometers have already been carried out using prototypes operated by the groups in Glasgow and at the Max Planck Institute for Quantum Optics, with an effective coincident observing period of 62 hours [20]. Although the sensitivity of course is not yet significant, they demonstrate the possibility of making long-term coincident observations with interferometers.

Let us first give a rough estimate of the sensitivity using $h_c(f)$ and $h_n(f)$. From fig. 1 we see that we can take, for our estimate, over a bandwidth $\Delta f\simeq 1$ kHz. Using $T=1$ yr, eq. (23) gives

(28) |

Requiring for instance SNR=1.65 (this corresponds to confidence level; a more precise discussion of the statistical significance, including the effect of the false alarm rate can be found in ref. [12]) gives an estimate for the minimum detectable value of ,

(29) |

This suggests that correlating two Virgo interferometers we can detect a relic spectrum with at SNR=1.65, or at SNR=1. Compared to the case of a single interferometer with SNR=1, eq. (27), we gain five orders of magnitude. As already discussed, to obtain a precise numerical value one must however resort to eq. (7). This involves an integral over all frequencies (which replaces the somewhat arbitrary choice of $\Delta f$ made above) and depends on the functional form of $\Omega_{\rm gw}(f)$. If for instance $\Omega_{\rm gw}(f)$ is independent of the frequency, using the numerical values of $S_n(f)$ plotted in fig. 1 (see ref. [16]) and performing the numerical integral, we get for the minimum detectable $h_0^2\Omega_{\rm gw}$ (we give the result for a generic value of the SNR and of the integration time)

(30) |

We see that this number is quite consistent with the approximate estimate (29), and with the value reported in ref. [21]. Stretching the parameters to SNR=1 ( c.l.) and years, the value goes down to . It is interesting to note that the main contribution to the integral comes from the region below approximately 100 Hz. In fact, neglecting the contribution to the integral of the region above 100 Hz, the result for the minimum detectable $h_0^2\Omega_{\rm gw}$ changes only by approximately . Also, the lower part of the accessible frequency range is not crucial. Restricting for instance to the region between 20 Hz and 200 Hz, the sensitivity on $h_0^2\Omega_{\rm gw}$ degrades by less than , while restricting to the region between 30 Hz and 100 Hz, the sensitivity degrades by approximately . Then, from fig. 1 we conclude that by far the most important source of noise for the measurement of a flat stochastic background is the thermal noise. In particular, the sensitivity to a stochastic background is limited basically by the mirror thermal noise, which dominates in the region above approximately 40 Hz.
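The two-step logic of this section (single-detector sensitivity, then the correlation gain) can be sketched numerically. The noise value below is an assumed round number near a first-generation interferometer's best sensitivity, not the actual Virgo curve, and O(1) angular factors are dropped:

```python
import math

H0_OVER_H0 = 3.24e-18      # (H_0 / h_0) in s^-1
f = 100.0                  # Hz
Sn = (1e-22)**2            # assumed noise (1e-22 Hz^-1/2)^2, NOT the real Virgo curve

# (i) single detector: set S_h(f) ~ S_n(f), dropping O(1) angular factors,
#     with Omega_gw = (4 pi^2 / 3 H0^2) f^3 S_h
h0sq_omega_single = (4.0 * math.pi**2 / 3.0) * f**3 * Sn / H0_OVER_H0**2
# (ii) correlation of two detectors: gain (2 T df)^(1/2) on Omega_gw
T, df = 3.15e7, 1.0e3      # one year, ~1 kHz bandwidth
h0sq_omega_corr = h0sq_omega_single / math.sqrt(2.0 * T * df)

print(h0sq_omega_single)   # ~ 1e-2: not an interesting level for a single detector
print(h0sq_omega_corr)     # ~ 5e-8: the correlation makes the level interesting
```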

The sensitivity depends however on the functional form of $\Omega_{\rm gw}(f)$. Suppose for instance that in the Virgo frequency band we can approximate the signal as a power law,

$$h_0^2\,\Omega_{\rm gw}(f)\propto f^{\alpha} \qquad (31)$$

For an increasing spectrum we find that it is detectable at SNR=1.65 if . For a decreasing spectrum we find (taking 5 Hz as the lower limit in the integration) . Note however that in this case the spectrum is peaked at low frequencies, and . So, for both increasing and decreasing spectra, to be detectable the signal must have a peak value, within the Virgo band, of order a few , while a constant spectrum can be detected at the level . Clearly, for detecting increasing (decreasing) spectra, the upper (lower) part of the frequency band becomes more important, and this is the reason why the sensitivity degrades compared to flat spectra: for increasing or decreasing spectra the maximum of the signal is at the edges of the accessible frequency band, where the interferometer sensitivity is worse.

LIGO-LIGO. The two LIGO detectors are under construction at a large distance from each other, km. This choice optimizes the possibility of detecting the direction of arrival of GWs from astrophysical sources, but it is not optimal from the point of view of the stochastic background, since the overlap function cuts off the integrand in eq. (7) around 60 Hz. The sensitivity to a stochastic background of the LIGO-LIGO correlation has been computed in refs. [7, 8, 9, 11, 12]. The result, for $\Omega_{\rm gw}$ independent of $f$, is

(32) |

for the initial LIGO. The advanced LIGO aims at . These numbers are given at c.l. in [11], and a detailed analysis of the statistical significance is given in [12].

Resonant masses and Resonant mass-Interferometer. Resonant mass detectors include bars like NAUTILUS, EXPLORER and AURIGA (see e.g. refs. [17, 22] for reviews). Spherical [23, 24] and truncated icosahedron (TIGA) [25] resonant masses are also being developed or studied. The correlation between two resonant bars and between a bar and an interferometer has been considered in refs. [10, 21, 26, 27, 28]. The values quoted in ref. [10] are as follows (using 1 year of integration time and SNR=1; the minimum detectable value of $\Omega_{\rm gw}$ grows as the square of the SNR, so the minimum detectable value at SNR=1.65 is about a factor of 3 larger). Correlating the AURIGA and NAUTILUS detectors, with the present orientation, we can detect . Reorienting the detectors for optimal correlation, which is technically feasible, we can reach . For AURIGA-VIRGO, with present orientation, , and for NAUTILUS-VIRGO, . A three-detector correlation AURIGA-NAUTILUS-VIRGO, with present orientation, would reach , and with optimal orientation . Although the improvement in sensitivity in a bar-bar-interferometer correlation is not large compared to a bar-bar or bar-interferometer correlation, a three-detector correlation would be important in ruling out spurious effects [10]. Preliminary results on a NAUTILUS-EXPLORER correlation, using 12 hours of data, have been reported in [29], and give a bound .

Using resonant optical techniques, it is possible to improve the sensitivity of interferometers at special values of the frequency, at the expense of their broad-band sensitivity. Since bars have a narrow band anyway, narrow-banding the interferometer improves the sensitivity of a bar-interferometer correlation by about one order of magnitude [21].

While resonant bars have been taking data for years, spherical detectors are at the moment still at the stage of theoretical studies (although prototypes might be built in the near future), but could reach extremely interesting sensitivities. In particular, two spheres with a diameter of 3 meters, made of Al5056, and located at the same site, could reach a sensitivity [10]. This figure improves using a denser material or increasing the sphere diameter, but it might be difficult to build a heavier sphere. Another very promising possibility is given by hollow spheres [24]. The theoretical studies of ref. [24] suggest a minimum detectable value at Hz.

## IV The energy scales probed by relic gravitational waves

Let us consider the standard Friedmann-Robertson-Walker (FRW) cosmological model, consisting of a radiation-dominated (RD) phase followed by the present matter-dominated (MD) phase, and let us call $a(t)$ the FRW scale factor. The RD phase goes backward in time until some new regime sets in. This could be an inflationary epoch, e.g. at the grand unification scale, or the RD phase could go back in time until Planckian energies are reached and quantum gravity sets in, i.e., until s. If the correct theory of quantum gravity is provided by string theory, the characteristic mass scale is the string mass, which is somewhat smaller than the Planck mass and is presumably in the GeV region, and the corresponding characteristic time is therefore one or two orders of magnitude larger than the Planck time. The transition between the RD and MD phases takes place when the temperature of the Universe is of the order of only a few eV, so we are interested in graviton production which takes place well within the RD phase, or possibly at Planckian energies.

A graviton produced with a frequency $f_*$, at a time $t_*$ within the RD phase, has today ($t=t_0$) a red-shifted frequency $f_0=f_*\,a(t_*)/a(t_0)$. To compute the ratio $a(t_*)/a(t_0)$ one uses the fact that during the standard RD and MD phases the Universe expands adiabatically. The entropy per unit comoving volume, $S\sim g_S(T)\,a^3T^3$, is constant, where $g_S(T)$ counts the effective number of species [2]. In the standard model, at $T\gtrsim 300$ GeV, $g_S(T)$ becomes constant and has the value $g_S=106.75$, while today $g_S(T_0)\simeq 3.91$ [2] and $T_0\simeq 2.728$ K [30]. Using $g_S(T_*)\,a^3(t_*)\,T_*^3=g_S(T_0)\,a^3(t_0)\,T_0^3$ one finds [31]

$$ f_0=f_*\,\frac{a(t_*)}{a(t_0)}\simeq 8.0\times 10^{-14}\left(\frac{100}{g_S(T_*)}\right)^{1/3}\left(\frac{1\,{\rm GeV}}{T_*}\right)f_* \qquad (33) $$
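The numerical prefactor in eq. (33) can be recovered directly from conservation of comoving entropy; a minimal Python sketch, using the values $g_S(T_0)\simeq 3.91$, $T_0\simeq 2.728$ K and $g_S(T_*)=100$ quoted in the text:

```python
# Redshift factor a(t_*)/a(t_0) from conservation of comoving entropy,
# g_S(T_*) a_*^3 T_*^3 = g_S(T_0) a_0^3 T_0^3.
kB = 8.617e-5                # Boltzmann constant in eV/K
T0_GeV = 2.728 * kB * 1e-9   # present photon temperature in GeV
gS0, gS_star = 3.91, 100.0   # effective entropy species today / at production
T_star = 1.0                 # production temperature in GeV

redshift = (gS0 / gS_star) ** (1.0 / 3.0) * T0_GeV / T_star
# redshift ~ 8.0e-14, reproducing the prefactor of eq. (33)
```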

The first point to be addressed is what is the characteristic value of the frequency $f_*$ produced at time $t_*$, when the temperature was $T_*$. One of the relevant parameters in this estimate is certainly the Hubble parameter at time of production, $H_*=H(t_*)$. This comes from the fact that $H_*^{-1}$ is of the order of the size of the horizon at time $t_*$. The horizon size, physically, is the length scale beyond which causal microphysics cannot operate (see e.g. [2], ch. 8.4), and therefore, for causality reasons, we expect that the characteristic wavelength $\lambda_*$ of gravitons or any other particles produced at time $t_*$ will be of order $H_*^{-1}$ or smaller.^6

^6 On a more technical side, the deeper reason has really to do with the invariance of general relativity under coordinate transformations, combined with the expansion over a fixed, non-uniform, background. Consider for instance a scalar field $\phi(x)$ and expand it around a given classical configuration, $\phi(x)=\phi_0(x)+\delta\phi(x)$. Under a general coordinate transformation $x\to x'$, by definition a scalar field transforms as $\phi'(x')=\phi(x)$. However, when we expand around a given background, we keep its functional form fixed, and therefore under $x\to x'$, $\phi_0(x)\to\phi_0(x')$, which for a non-constant field configuration is different from $\phi_0(x)$. It follows that the perturbation $\delta\phi$ is not a scalar under general coordinate transformations, even if $\phi$ was a scalar. The effect becomes important for the Fourier components of $\delta\phi$ with a wavelength comparable to or greater than the variation scale of the background $\phi_0$. (We are discussing a scalar field for notational simplicity, but of course the same holds for the metric tensor $g_{\mu\nu}$.) In a homogeneous FRW background the only variation is temporal, and its timescale is given by $H^{-1}$. Therefore modes with wavelength greater than $H^{-1}$ are in general plagued by gauge artefacts. This problem manifests itself, for instance, when computing density fluctuations in the early Universe. In this case one finds spurious modes which can be removed with an appropriate gauge choice, see e.g. ref. [2], sect. 9.3.6, or ref. [32].

Therefore, we write $\lambda_*=\epsilon H_*^{-1}$. The above argument suggests $\epsilon\lesssim 1$. During RD, $H^2=(8\pi/3)G\rho$. The contribution to $\rho$ from a single species of relativistic particle with $g_i$ internal states (helicity, color, etc.) and temperature $T_i$ is $\rho_i=(\pi^2/30)\,g_iT_i^4$ for a boson and $(7/8)(\pi^2/30)\,g_iT_i^4$ for a fermion. Taking into account that the $i$-th species has in general a temperature $T_i\neq T$ if it already dropped out of equilibrium, we can define a function $g(T)$ from $\rho=(\pi^2/30)\,g(T)\,T^4$. Then [2]

$$ g(T)=\sum_{i={\rm bosons}}g_i\left(\frac{T_i}{T}\right)^4+\frac{7}{8}\sum_{i={\rm fermions}}g_i\left(\frac{T_i}{T}\right)^4 \qquad (34) $$

The sum runs over relativistic species. This holds if a species is in thermal equilibrium at the temperature $T_i$. If instead it does not have a thermal spectrum (which in general is the case for gravitons) we can still use the above equation, where for this species $T_i$ does not represent a temperature but is defined (for bosons) from $\rho_i=(\pi^2/30)\,g_iT_i^4$, where $\rho_i$ is the energy density of this species. The quantity $g_S(T)$ used before for the entropy is given by the same expression as $g(T)$, with $(T_i/T)^4$ replaced by $(T_i/T)^3$. We see that both $g(T)$ and $g_S(T)$ give a measure of the effective number of species. For most of the early history of the Universe, $g(T)=g_S(T)$, and in the standard model at $T\gtrsim 300$ GeV they have the common value $g=g_S=106.75$, while today $g(T_0)\simeq 3.36$ and $g_S(T_0)\simeq 3.91$ [2]. Therefore

$$ H_*\simeq 1.66\,g^{1/2}(T_*)\,\frac{T_*^2}{M_{\rm Pl}} \qquad (35) $$

and, using $\lambda_*=\epsilon H_*^{-1}$, i.e. $f_*=\epsilon^{-1}H_*$, eq. (33) can be written as [31]

$$ f_0\simeq 1.65\times 10^{-7}\,\frac{1}{\epsilon}\left(\frac{T_*}{1\,{\rm GeV}}\right)\left(\frac{g(T_*)}{100}\right)^{1/6}\ {\rm Hz} \qquad (36) $$
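The $1.65\times 10^{-7}$ Hz prefactor follows from composing eq. (33) with $f_*=H_*/\epsilon$ and eq. (35); a short numerical check in Python (the $8.0\times 10^{-14}$ redshift prefactor and $1.66\,g^{1/2}T_*^2/M_{\rm Pl}$ are taken from the text, the $\hbar$ conversion is standard):

```python
# Characteristic frequency today, composing eq. (33) with f_* = H_*/epsilon
# and eq. (35) for H_* during radiation dominance.
M_Pl = 1.22e19    # Planck mass in GeV
hbar = 6.582e-25  # GeV * s, converts H_* from GeV to s^-1
g = 100.0         # effective number of species at production
T_star = 1.0      # production temperature in GeV
eps = 1.0

H_star = 1.66 * g**0.5 * T_star**2 / M_Pl    # Hubble rate in GeV
f_star = (H_star / hbar) / eps               # characteristic frequency in Hz
f0 = 8.0e-14 * (100.0 / g)**(1.0/3.0) * (1.0 / T_star) * f_star
# f0 ~ 1.65e-7 Hz, the prefactor of eq. (36)
```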

This simple equation allows us to understand a number of important points concerning the energy scales that can be probed in GW experiments. The simplest estimate of $f_0$ corresponds to taking $\epsilon=1$ in eq. (36) [11]. In this case, we would find that a graviton observed today at the frequency $f_0=1$ Hz was produced when the Universe had a temperature $T_*\simeq 6\times 10^6$ GeV. Using the relation between time and temperature in the RD phase,

$$ t_*\simeq 0.301\,g^{-1/2}(T_*)\,\frac{M_{\rm Pl}}{T_*^2}\simeq 2.42\,g^{-1/2}(T_*)\left(\frac{1\,{\rm MeV}}{T_*}\right)^2\ {\rm s} \qquad (37) $$

we find that this corresponds to a production time $t_*\sim 7\times 10^{-21}$ sec, and at time of production this graviton had an energy $E_*\simeq 0.3$ MeV. At 100 Hz we get $t_*\sim 7\times 10^{-25}$ sec, $T_*\simeq 6\times 10^8$ GeV and $E_*\simeq 3$ GeV. These would be, therefore, the scales relevant to Virgo and LIGO. For a frequency $f_0=10^{-3}$ Hz, relevant to LISA, we would get instead $t_*\sim 7\times 10^{-15}$ sec, $T_*\simeq 6\times 10^3$ GeV.
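The chain of estimates above can be automated by inverting eqs. (36) and (37); a minimal Python sketch under the same assumptions as the text ($\epsilon=1$, $g(T_*)=100$):

```python
# Temperature, time and graviton energy at production for a graviton
# observed today at frequency f0, inverting eqs. (36) and (37).
M_Pl = 1.22e19        # Planck mass in GeV
hbar = 6.582e-25      # GeV * s
h_planck = 4.136e-15  # Planck constant in eV * s
g = 100.0             # effective species at production (assumption)

def production_scales(f0_hz):
    T_star = f0_hz / 1.65e-7                             # GeV, from eq. (36)
    t_star = 0.301 * M_Pl / (g**0.5 * T_star**2) * hbar  # seconds, eq. (37)
    f_star = f0_hz * T_star / 8.0e-14                    # Hz, undoing eq. (33)
    E_star = h_planck * f_star                           # graviton energy in eV
    return T_star, t_star, E_star

T1, t1, E1 = production_scales(1.0)  # Virgo/LIGO low end, f0 = 1 Hz
# T1 ~ 6e6 GeV, t1 ~ 7e-21 s, E1 ~ 0.3 MeV
```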

However, the estimate $\epsilon=1$, i.e. $\lambda_*\sim H_*^{-1}$, can sometimes be incorrect even as an order of magnitude estimate. In the Appendix we illustrate this point with two specific examples, one in which the assumption $\epsilon\sim 1$ turns out to be basically correct, and one in which it can miss by several orders of magnitude. Both examples illustrate the fact that the argument does not involve only kinematics, but also the dynamics of the production mechanism.

From eq. (36) we see that the temperatures of the early Universe explored by detecting today relic GWs at a given frequency $f_0$ are smaller by a factor approximately equal (for constant $g$) to $\epsilon$, compared to the estimate with $\epsilon=1$. Equivalently, a signal produced at a given temperature could in principle show up today in the Virgo/LIGO frequency band even when a naive estimate with $\epsilon=1$ suggests that it falls at lower frequencies.

There is however another effect, which instead gives hopes of exploring the Universe at much higher temperatures than naively expected, using GW experiments. In fact, the characteristic frequency that we have discussed is the value of the cutoff frequency in the graviton spectrum. Above this frequency, the spectrum decreases exponentially [33], and no signal can be detected. Below this frequency, however, the form of the spectrum is not fixed by general arguments. Thermal spectra have a low frequency behaviour $d\rho_{\rm gw}/df\sim f^2$, as we read from eq. (21) inserting a Bose-Einstein distribution for the graviton occupation number, so that, at low $f$, $\Omega_{\rm gw}(f)\sim f^3$. However, below the Planck scale gravitons interact too weakly to thermalize, and there is no a priori reason for an $f^3$ dependence. The gravitons will retain the form of the spectrum that they had at time of production, and this is a very model dependent feature. However, from a number of explicit examples and general arguments that we will discuss below, we learn that spectra flat or almost flat over a large range of frequencies seem to be not at all unusual.
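The low-frequency scaling of a thermal spectrum quoted above can be checked numerically; a small Python sketch in dimensionless units $x=hf/kT$ (only the $f^2$, $f^3$ behaviour is taken from the text):

```python
import math

# Thermal (Bose-Einstein) graviton spectrum: d(rho)/df ~ f^3 / (exp(x) - 1)
# with x = h*f/(k*T). For x << 1 the denominator ~ x, so d(rho)/df ~ f^2,
# and Omega_gw(f) ~ f * d(rho)/df ~ f^3.
def drho_df(x):
    # expm1 keeps the small-x evaluation numerically accurate
    return x**3 / math.expm1(x)

ratio_small = drho_df(1e-4) / 1e-4**2  # -> ~1: spectrum goes as f^2 at low f
ratio_tiny = drho_df(1e-6) / 1e-6**2   # -> closer still to 1
```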

This fact has potentially important consequences. It means that, even if a spectrum of gravitons produced during the Planck era has a cutoff at frequencies much larger than the Virgo/LIGO frequency band, we can still hope to observe in the 10 Hz–1 kHz region the low-frequency part of these spectra. In the next subsection we will therefore discuss what signals can be expected from the Universe at extremely high (Planckian, string or GUT) temperatures.

## V Toward the Planck era?

The scale of quantum gravity is given by the Planck mass, related to Newton's constant by $G=1/M_{\rm Pl}^2$ ($\hbar=c=1$), so that $M_{\rm Pl}\simeq 1.22\times 10^{19}$ GeV. More precisely, since in the gravitational field equations Newton's constant enters through the combination $8\pi G$, we expect that the relevant scale is the reduced Planck mass $\bar M_{\rm Pl}=M_{\rm Pl}/\sqrt{8\pi}\simeq 2.4\times 10^{18}$ GeV. Using eq. (36) with $\epsilon=1$ and $T_*=\bar M_{\rm Pl}$ gives

$$ f_0\simeq 4\times 10^{11}\left(\frac{g(T_*)}{100}\right)^{1/6}\ {\rm Hz} \qquad (38) $$
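Numerically, eq. (38) is just eq. (36) evaluated at the reduced Planck mass; a minimal Python sketch under the same assumptions ($\epsilon=1$, $g(T_*)=100$):

```python
import math

# Cutoff frequency today for gravitons produced at the (reduced) Planck
# scale: eq. (36) with epsilon = 1 and T_* = M_Pl / sqrt(8*pi).
M_Pl = 1.22e19                         # Planck mass in GeV
M_red = M_Pl / math.sqrt(8 * math.pi)  # reduced Planck mass, ~2.4e18 GeV
g = 100.0

f0_cutoff = 1.65e-7 * M_red * (g / 100.0)**(1.0/6.0)  # Hz
# f0_cutoff ~ 4e11 Hz: far above the Virgo/LIGO band, which is why only
# the low-frequency tail of a Planck-era spectrum could be accessible.
```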

The dependence on $g(T_*)$ is rather weak because of the power 1/6 in eq. (36). For $g(T_*)=1000$, $f_0$ increases by a factor $10^{1/6}\simeq 1.5$ relative to $g(T_*)=100$. For