Cosmology Is in Crisis — But Not for the Reason You May Think

Science is advancing rapidly. We are eradicating diseases, venturing further into space and discovering a growing zoo of subatomic particles. But cosmology — which is trying to understand the evolution of the entire universe using theories that work well to describe other systems — is struggling to answer many of its most fundamental questions.

We still have no idea what the vast majority of the universe is made of. We struggle to understand how the Big Bang could suddenly arise from nothing or where the energy for “inflation,” a very short period of rapid growth in the early universe, came from. But despite these gaps in knowledge, it is actually human nature — our tendency to interpret data to fit our beliefs — that is the biggest threat to modern cosmology.

Cosmological concerns

The picture of the cosmos we now have is dominated by two components, dark matter and dark energy. Together they account for 95% of the energy content of the universe, yet we do not know what they are. This is a genuine issue for cosmologists, and is rightly regarded as one of the most important problems in physics: proposed explanations for the nature of dark energy range from scrapping Einstein's theory of relativity, to adding a new fundamental field of nature, to the possibility that we are seeing the effects of neighboring parallel universes.

But the dark energy problem is not the one that threatens to undermine cosmological experiments. In cognitive science, confirmation bias is the tendency to unconsciously interpret information, and select data, in ways that confirm one's existing beliefs. For cosmologists, this means the unconscious (or conscious) tuning of results so that the final cosmological interpretation tends to confirm what they already believe. It is particularly pernicious in cosmology because, unlike in laboratory-based experiments, we cannot rerun the experiment many times to investigate statistical anomalies: we only have one universe.

Nothing wrong with naming a nebula after what it looks like though, in this case a horse head. (Image Credit: Ken Crawford/Wikimedia Commons)

A study that surveyed the published cosmological literature between 1996 and 2008 showed that the statistics of the results were too good to be true. The spread of the reported values was smaller than would be expected mathematically given their quoted uncertainties, which means cosmologists were agreeing with each other to a worrying degree. Either results were somehow being tuned to reflect the status quo, or there was a selection effect in which only papers that agreed with the status quo were being accepted by journals.
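
To give a flavor of the kind of check involved (a simplified sketch with made-up numbers, not the actual analysis from that study), one can ask whether a set of published measurements scatters as much as their quoted error bars imply. A chi-squared per degree of freedom well below one is the statistical signature of results that agree "too well."

```python
# Sketch of a "too good to be true" check on a set of published measurements.
# The values and errors below are hypothetical, purely for illustration.
import numpy as np
from scipy import stats

values = np.array([0.72, 0.73, 0.71, 0.72, 0.73, 0.72])   # reported measurements
errors = np.array([0.05, 0.04, 0.06, 0.05, 0.04, 0.05])   # quoted 1-sigma errors

# Weighted mean of the measurements
weights = 1.0 / errors**2
mean = np.sum(weights * values) / np.sum(weights)

# Chi-squared of the scatter about the weighted mean
chi2 = np.sum(((values - mean) / errors) ** 2)
dof = len(values) - 1

# Probability of seeing a scatter this small (or smaller) by chance:
# a very low value suggests the results agree with one another more
# closely than their error bars should permit.
p_too_small = stats.chi2.cdf(chi2, dof)

print(f"chi2/dof = {chi2/dof:.2f}, P(scatter this small) = {p_too_small:.4f}")
```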

Unfortunately, the problem is only going to get more difficult to avoid as experiments get better. Ask most cosmologists what they think dark energy will be, and you will grudgingly receive the answer that it is probably a vacuum energy. Ask most cosmologists if they think Einstein's theory is correct on cosmic scales, and you will grudgingly receive the answer that yes, it probably is. If these assertions turn out to be true, how can we convince the wider scientific community, and humanity, that any cosmological finding is not just the result of getting the answer we expected to get?

Ways forward

There are three solutions to this problem, all equally important. Blind analysis is the most straightforward and obvious, and has also been the most talked about. Here the aim is to create data sets containing randomized or fake signals, so that the scientists doing the cosmological analysis are blind, meaning they do not know whether they are working on the true data or the fake.

Blind analysis and control samples are commonly and successfully used in biology, for example. The problem in cosmology is that we have no control group and no control universe, just the one, so any blind data has to be faked or randomized. Blind analysis has started to be used in cosmology, but it is not the end of the story.
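
As a rough illustration of how blinding can be organized in practice, here is a minimal sketch in Python. The file names and labels are hypothetical, and real experiments use more elaborate schemes, such as injecting hidden offsets into the catalogues, but the principle is the same: analysts do not know which data set is real until the analysis is frozen.

```python
# Sketch of a simple blinding scheme: analysts receive a shuffled set of
# catalogues, only one of which is real, and the key stays sealed until the
# analysis pipeline is frozen. File names and functions here are hypothetical.
import json
import random

def make_blinded_set(real_catalogue, fake_catalogues, key_file="blinding_key.json"):
    """Shuffle the real catalogue in among fakes and write the key to a sealed file."""
    catalogues = [real_catalogue] + list(fake_catalogues)
    labels = list(range(len(catalogues)))
    random.shuffle(labels)

    blinded = {f"dataset_{label}": cat for label, cat in zip(labels, catalogues)}

    # The key records which label points to the real data;
    # nobody opens this file until it is time to unblind.
    with open(key_file, "w") as f:
        json.dump({"real_label": labels[0]}, f)

    return blinded

# Analysts run the identical pipeline on every data set without knowing which is real.
blinded = make_blinded_set("real_shear_catalogue.fits",
                           ["fake_catalogue_A.fits", "fake_catalogue_B.fits"])
for name in blinded:
    print(f"run pipeline on {name}")  # placeholder for the real analysis
```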

In addition to blind analysis, there are two further approaches that are less widely practiced but no less important. The first is a systems engineering approach to experiment design, in which every aspect of an experiment has a list of requirements and result-independent tests it must pass before it is used. The idea is that if each sub-section of an analysis passes these tests, the analysis as a whole should produce unbiased results. The second is transparency: by publishing data and code openly for anyone to download, there is no place to hide tuned parameters or dodgy data.
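
As a toy illustration of what a result-independent requirement test might look like, here is a short sketch. The pipeline stage, tolerance and numbers are invented for the example, not taken from any real experiment; the point is that the test checks a property of the method on simulated data without ever looking at the cosmological answer.

```python
# Toy result-independent requirement test: it checks that a pipeline stage
# (here, a noise estimator) is unbiased on simulated data, without reference
# to any cosmological result. The tolerance is a hypothetical requirement.
import numpy as np

def estimate_noise(data):
    """The pipeline stage under test: estimate the noise level of a data vector."""
    return np.std(data, ddof=1)

def test_noise_estimate_is_unbiased(true_sigma=1.0, n_trials=500, tolerance=0.02):
    rng = np.random.default_rng(seed=42)
    estimates = [estimate_noise(rng.normal(0.0, true_sigma, size=1000))
                 for _ in range(n_trials)]
    bias = np.mean(estimates) - true_sigma
    assert abs(bias) < tolerance, f"noise estimate biased by {bias:.4f}"

test_noise_estimate_is_unbiased()
print("requirement met: noise estimator unbiased to within tolerance")
```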

By using these three approaches — blinding, systems engineering and transparency — the next generation of cosmology experiments should be able to convince people that confirmation bias is not a factor in understanding the cosmos. Without them, by looking to the heavens, the most interesting thing we may find is ourselves.


Thomas Kitching, Lecturer in Astrophysics, UCL

This article was originally published on The Conversation. Read the original article.

Banner Image Credit: Hubble eXtreme Deep Field, NASA/Wikimedia Commons

Thomas Kitching (http://www.thomaskitching.net/)
I am a cosmologist, astrophysics lecturer and Royal Society University Research Fellow working at the Mullard Space Science Laboratory at UCL. My interests are in dark energy, dark matter, statistics and computer science. I work in particular on a phenomenon called gravitational lensing. I am a manager in one of the world's largest cosmology experiments, a European Space Agency mission called Euclid.