Saturday, May 3, 2008

Observing gas dynamics in distant galaxies

Astronomy has come a long way since the days when astronomers spent whole nights tiring their eyes at the eyepiece of a refractor telescope. It evolved immensely in the 20th century, and the use of CCDs for astronomical research in particular was a major revolution. Now we can quantify the incoming light of objects in the sky with an unprecedented level of detail.

One recent development I want to discuss here is the Integral Field Unit, or IFU. One of the main ways that astronomers study stars and galaxies is through spectroscopy - that is, separating the incoming light into its different wavelengths, more or less like putting a prism at the end of the telescope.


Image courtesy of amateurspectroscopy.com

Through the use of spectroscopy, one can actually determine the elements that compose the observed object, since each group of lines is a unique signature of that element. Not only that, but we can measure how fast the emitting material is moving with respect to us (as, for example, in the case of redshifts). The observed dynamics can provide valuable information for understanding how an object was formed.
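The arithmetic behind such a velocity measurement is simple enough to sketch. Here is a toy calculation (the observed wavelength is a made-up number, and we use the non-relativistic Doppler approximation, valid when v is much less than c) that turns a shifted hydrogen H-alpha line into a recession velocity:

```python
# Toy example: recession velocity from a redshifted spectral line,
# using the non-relativistic Doppler approximation (v << c).

C_KM_S = 299_792.458  # speed of light in km/s

def recession_velocity(lambda_observed, lambda_rest):
    """Velocity from the fractional wavelength shift z = (obs - rest) / rest."""
    z = (lambda_observed - lambda_rest) / lambda_rest
    return z * C_KM_S

# Hypothetical measurement: the H-alpha line (rest wavelength 6562.8 Angstroms)
# observed at 6629.0 Angstroms.
v = recession_velocity(6629.0, 6562.8)
print(f"v ~ {v:.0f} km/s")  # about 3,000 km/s
```

The same shift measured in every pixel of an IFU is what turns a spectrum into a velocity map.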

Now, IFUs are even more sophisticated. By dividing the image into small pixels and taking a spectrum for each one, you can map how the whole object is moving! For example, a number of astronomers study how hydrogen gas moves around galaxies far, far away, which tells them whether that gas is rotating around some center. To give you an idea, each pixel subtends an angle smaller than a football field seen on the Moon! Since the galaxy is far, far away (easily more than a billion light-years, in fact), we aren't resolving anything like a football field: each pixel in the IFU gives us spectral information about a region approximately 1,000 light-years across.
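The small-angle arithmetic behind these figures is easy to reproduce. In this sketch the pixel size (0.05 arcseconds) and the distance (4 billion light-years) are illustrative choices, not taken from any particular instrument or galaxy:

```python
import math

ARCSEC_TO_RAD = math.pi / (180 * 3600)

def physical_size(theta_arcsec, distance_ly):
    """Small-angle approximation: size = angle (in radians) * distance."""
    return theta_arcsec * ARCSEC_TO_RAD * distance_ly

# A football field (~100 m) at the Moon's distance (~384,400 km)
# subtends about 0.054 arcseconds:
field_angle = (100 / 384_400_000) / ARCSEC_TO_RAD
print(f"football field on the Moon: {field_angle:.3f} arcsec")

# A hypothetical IFU pixel of 0.05 arcsec, viewing a galaxy
# 4 billion light-years away:
size = physical_size(0.05, 4e9)
print(f"pixel covers ~{size:.0f} light-years")  # roughly 1,000 ly
```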

Still, that resolution is enough to tell us a great deal about the galaxy, in what might resemble a 3D picture. How fast is it rotating? Are there internal kinematics that differ from the dynamics of the galaxy as a whole? Is the movement of the gas ordered at all, or is it random? Or maybe, through studying the motion of the gas, we realize that the object is the result of two galaxies that collided some millions of years ago! This is all very important when we try to answer one simple question that has not yet found a satisfactory answer: how did galaxies (and our own Galaxy, for that matter) come to be what they are?

For more information, check out James Larkin's talk in mp3 or pdf format.

This post had a lot of help from Karín.

Friday, February 29, 2008

Redshift Periodicities of Quasars and Galaxies

When we talk about the overall structure of the Universe, we assume what's called the cosmological principle. The cosmological principle states that the Universe, on average, looks the same no matter where you are. The trick, of course, is that you have to average over a couple hundred million light years.

This week, however, I read two papers that tried to challenge this assumption. They didn't succeed, but I'll tell you about them anyway. The two papers are Redshift Periodicity in Quasar Number Counts from Sloan Digital Sky Survey and Spatial Periodicity of Galaxy Number Counts from Fourier Analysis of the Large Scale Surveys of Galaxies in the Universe. Both papers are by J.G. Hartnett, a young-earth creationist. His method was to take data sets from the SDSS (Sloan Digital Sky Survey) and the 2dFGRS (2-degree Field Galaxy Redshift Survey) and look for periodicities, or harmonic components, in their radial distribution.

Now, the radial distribution is measured by their redshifts, which, by Hubble's law, are proportional to their distance. Hartnett looked and found such harmonic components in both the quasar and galaxy populations. He claims that the quasar harmonics show that quasar redshifts do not follow Hubble's law, but rather are an intrinsic property of quasars, themselves. At the same time, he claims that the galaxy harmonics show that galaxies sit on concentric spheres centred on the Milky Way. Now, if either of these claims were true then things would be difficult for the cosmological principle.
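To see what "looking for harmonic components" means in practice, here is a toy version of such an analysis. This is synthetic data, not Hartnett's: we inject an artificial periodic modulation into a smooth redshift distribution, subtract the smooth trend, and recover the period with a Fourier transform of the number counts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic radial distribution: a smooth declining trend with an
# injected periodic modulation of period 0.1 in redshift.
n_bins = 256
z = np.linspace(0.0, 2.0, n_bins)
period = 0.1
counts = 1000 * np.exp(-z) * (1 + 0.2 * np.cos(2 * np.pi * z / period))
counts += rng.normal(0, 5, n_bins)  # measurement noise

# Remove the smooth trend, then look for harmonic components.
residual = counts - 1000 * np.exp(-z)
power = np.abs(np.fft.rfft(residual)) ** 2
freqs = np.fft.rfftfreq(n_bins, d=z[1] - z[0])

peak = freqs[np.argmax(power[1:]) + 1]  # skip the zero-frequency bin
print(f"strongest periodicity at dz ~ {1 / peak:.2f}")
```

The catch, as described below, is that a peak like this can come from the survey itself (a selection effect) or from real large-scale structure, neither of which threatens the cosmological principle.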

Lucky for the cosmological principle, the claims don't hold any water. Here's why: the quasar harmonics are spurious and the galaxy harmonics are explained (and predicted) by cosmological large-scale structure. First for the quasars, the harmonics are due to a selection effect. The SDSS cannot detect quasars at certain redshifts because they look just like stars, so we see artificial valleys in the radial distribution. Of course, because these are due to a selection effect, they can't have any implication on cosmology.

The galaxy harmonics that he found do appear to be real, but they certainly aren't due to concentric spherical shells of galaxies. He demonstrated this himself when he looked in two different directions and didn't see the same harmonics. How can they be spherical shells if they aren't spherically symmetric? They can't: that's how. Rather, the length scales of these harmonics match rather well the length scale of the large-scale filaments seen in the Millennium Run, the largest cosmological computer simulation yet completed.

Hartnett concludes that the cosmological principle is in question. But a careful review of his papers shows that the only thing questionable is his research.

----------------
References:
Hartnett, J. G., arXiv:0711.4885v2
Hartnett, J. G., arXiv:0712.3833v2

Do Type Ia Supernovae prove Lambda > 0?

This paper by Michael Rowan-Robinson re-examines the analysis of supernova data that has been used to claim a positive cosmological constant.

Type Ia supernovae are white dwarf stars in binary systems which accrete enough mass to push them over the Chandrasekhar limit and explode. They are (more or less) standard candles that can be used out to relatively large distances. For this reason, supernova data can be used to measure cosmological parameters, in particular Omega_M (the matter density of the universe) and Omega_Lambda (the vacuum energy, or cosmological constant). Two groups focused on making these measurements in the 1990s: the Supernova Cosmology Project (SCP) and the High-Z Supernova Search Team (HZT). Around 1998-2000, the consensus was that Omega_M = 0.3 and Omega_Lambda = 0.7.
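The reason supernovae can distinguish these cosmologies is that the luminosity distance at a given redshift depends on Omega_M and Omega_Lambda. A minimal sketch (assuming a flat universe, H0 = 70 km/s/Mpc, and simple trapezoidal integration) shows that a standard candle at z = 0.5 appears about 0.4 magnitudes dimmer in the Omega_M = 0.3, Omega_Lambda = 0.7 universe than in an Omega_M = 1 universe:

```python
import math

def lum_distance_mpc(z, omega_m, omega_l, h0=70.0, steps=1000):
    """Luminosity distance in Mpc for a flat FRW universe
    (assumes omega_m + omega_l = 1), via trapezoidal integration."""
    c = 299792.458  # km/s
    dz = z / steps
    integral = 0.0
    for i in range(steps + 1):
        zi = i * dz
        e = math.sqrt(omega_m * (1 + zi) ** 3 + omega_l)
        weight = 0.5 if i in (0, steps) else 1.0
        integral += weight / e * dz
    return (1 + z) * (c / h0) * integral

def distance_modulus(d_mpc):
    """Apparent minus absolute magnitude for a distance in Mpc."""
    return 5 * math.log10(d_mpc * 1e6 / 10)

z = 0.5
mu_lambda = distance_modulus(lum_distance_mpc(z, 0.3, 0.7))
mu_matter = distance_modulus(lum_distance_mpc(z, 1.0, 0.0))
print(f"extra dimming at z={z}: {mu_lambda - mu_matter:.2f} mag")
```

A few tenths of a magnitude is exactly the size of the effect the two teams measured, which is why the systematic errors discussed below matter so much.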

This paper points out several potential problems with the supernova analysis. First, there may be a sample bias. The author shows that the distribution of all supernovae discovered after 1956 (excluding poorly measured data points) differs from the distribution used by the SCP: the SCP's data tend toward brighter supernovae at lower redshift. Since the relative dimness of more distant supernovae is what is being measured, this is an important problem. The SCP uses supernovae whose light curves were measured both at maximum brightness and 15 days later, as this allows them to adjust the absolute magnitude estimate. The author suggests that the supernovae which were measured again 15 days later may be brighter than average. There is also a suggestion of a systematic error in pre-1990 photographic photometry.

The 15-day measurement is made in order to adjust the estimate of the absolute magnitude of the supernova. It is believed that there is a correlation between the absolute magnitude of the supernova and the decay time of the light curve. There are several methods used to make this correction. The first is to estimate dM/dm(15) (the relation between peak absolute magnitude and the decline in brightness over the following 15 days) for supernovae which have an independent distance measurement, and apply that correction to other data points. The second is to use multi-color light curve shapes (MLCS) to fit the light curves. The third is to apply a "stretch" factor to adjust for the decay times of the light curves. The author of this paper is skeptical of the way these adjustments have been done. He argues that any supernova that was only detected after the period of maximum brightness should be excluded from the analysis, because otherwise the same brightness-decline relationship has to be used to extrapolate backwards, and this artificially reduces the scatter in the results. He also shows that the "stretch" factor method, used by the SCP, gives systematically different results from the other approaches.
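The first kind of correction can be sketched very simply. The coefficients below are hypothetical placeholders (real analyses fit them to supernovae with independent distances), but they show the form of a decline-rate correction:

```python
# Sketch of a decline-rate (Phillips-style) correction with
# HYPOTHETICAL coefficients, for illustration only.
# Faster-declining Type Ia supernovae are intrinsically fainter, so the
# peak magnitude is standardized using dm15, the decline in magnitude
# over the 15 days after maximum brightness.

SLOPE = 0.78     # hypothetical mag per unit of dm15
DM15_REF = 1.1   # hypothetical reference decline rate

def corrected_peak_magnitude(m_peak_observed, dm15):
    """Standardize an observed peak magnitude to the reference decline rate."""
    return m_peak_observed - SLOPE * (dm15 - DM15_REF)

# A fast decliner (dm15 = 1.5) is intrinsically fainter than the reference,
# so its standardized magnitude is brighter (numerically smaller):
print(corrected_peak_magnitude(19.0, 1.5))
```

The paper's worry is precisely about corrections of this shape: if the relation is also used to extrapolate light curves that were never observed at maximum, the scatter shrinks artificially.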

The author of this paper criticizes the supernova teams for neglecting the effects of extinction within the galaxy which hosts the supernova. Both teams claim the extinction is negligible. The author believes this may not be valid because 1) the majority of supernovae take place in spiral galaxies, which have more extinction than ellipticals, and 2) star forming systems are more common as one looks backwards to z=1, so one would expect more extinction in galaxies hosting high-redshift supernovae. The HZT group give their estimates of extinction, which the author shows to be 0.22 magnitudes less than found by other methods.
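To put 0.22 magnitudes in perspective: magnitudes are logarithmic, so a magnitude difference dm corresponds to a flux ratio of 10^(-0.4 * dm), which here is roughly a 20% effect, comparable to the cosmological signal itself:

```python
# Convert a magnitude offset into a flux ratio: f2/f1 = 10**(-0.4 * dm).
dm = 0.22  # the difference in extinction estimates quoted above
flux_ratio = 10 ** (-0.4 * dm)
print(f"{(1 - flux_ratio) * 100:.0f}% dimmer")
```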

Finally, the author chooses his own sample of supernova data, applies what he believes to be more consistent corrections for decay time and extinction, and finds that the dimming of distant supernovae (which is the evidence for a positive cosmological constant) is less significant than other people have found. Making a plot of the Hubble diagram, including gravitational lensing and S-Z clusters, he finds a best-fit value of Omega_M = 0.81 +/- 0.12. His analysis does not rule out a cosmological constant, but he concludes that the evidence for it is relatively weak. He argues that since there is more motivation for an Omega_M = 1 universe, we should not be so quick to rule it out.

Thursday, February 28, 2008

A New Astronomy Blog

Hello and welcome to our new Astronomy Blog. A collaboration of the astrophysics grad students of Caltech, this blog will present current astronomy research in an easy-to-understand and concise way.

Check back often or subscribe to our RSS feed. Have a question? Ask and we'll reply.

And, most importantly: enjoy the science!