There’s a game young children like to play when they’re just beginning to learn how to interact with the world, talk to others, and indulge their natural curiosity: it’s called the “Why?” game. Take some natural phenomenon: Why is it raining? Why do people die? Why is the sky blue? If you furnish the child with an answer, they’ll inevitably ask “Why?” again, until you reach the limits of your knowledge (or your patience), and snap back with an unsatisfying: “Because that’s just how it is.”
Most people eventually grow out of the “Why” game. This is partly because, as a conversational strategy, it’s irritating. But also, people come to terms with there being a set of fundamental “whys” where the answers won’t necessarily be clear. You can fill this gap with your religion, or with something like the multiverse theory combined with the weak anthropic principle, or any number of other philosophical explanations. You can use a nice healthy dose of pragmatism and choose to worry about more pressing matters before trying to solve the mysteries of the universe.
But the “Why” game is still a useful one to play as an adult, testing each link in the chain of your beliefs, exploring your own motivations, and examining the trends and forces behind changes in society. So, in this spirit, I’ll ask a question that’s been more and more popular lately: “Why should I listen to scientists?”
This question is leading politicians toward inaction on climate change, parents toward not vaccinating their kids, and consumers toward opposing genetic modification of foods, just as social media—where every opinion seems to have equal weight—leads thousands of people into bizarre conspiracy-theory wormholes.
As the world grows more complex and interconnected, with threats from emerging technologies, biodiversity collapse, and climate change, it’s more necessary than ever that we work together in a rational, constructive way. But when you start from irreconcilable standpoints about what is true, or even how to find truth, practical problem-solving becomes impossible. So how can we persuade people that expertise isn’t overrated?
That Which Can Be Proven
For many of the sciences, especially physics, the answer to this question was once obvious: the experts are right. Verifiably, nigh-on indisputably, their theories have the power to make accurate predictions that competing world views cannot make. Newtonian mechanics, when its laws were applied properly, could predict how the stars and planets would move. And when Einstein’s theory of general relativity superseded Newton’s law of gravitation, it was confirmed by a famous expedition that observed the deflection of starlight during an eclipse: a prediction general relativity made that Newtonian gravity could not match.
On the surface, this line of argument is persuasive—but it is also flawed, and far from universal. What if the theory is complicated enough that drawing that straight line—from theory to prediction to observation that confirms the theory—is far from obvious? Why should people believe it then?
The invention of writing has allowed scientific and technological knowledge to accumulate over thousands of years, and people have had to specialize to a greater and greater degree: the age of polymaths who knew everything is over, and, as the economists like to say, no individual person can make a pencil. Instead, more often than not, you have to study and hone your expertise for many years, in ever-narrower fields, to make a contribution to our understanding of the world. In physics, for example, Nobel Prizes are increasingly won not by individual geniuses, but by ever-larger collaborations of scientists, running experiments that cost billions upon billions of dollars.
That Which Cannot
This is before we consider newer sciences with larger uncertainties attached. Climate science is a prime example, where the complexity and inherent uncertainties associated with the system that’s being analyzed prevent us from making absolute statements about precisely how, for example, rainfall patterns will shift if we continue to emit carbon dioxide into the atmosphere for the rest of this century.
So scientists make predictions with uncertainties attached; when you read climate change reports like those from the IPCC, their claims are ranked according to confidence levels. Things like “more greenhouse gas emissions will increase temperature” are practically certain with the current state of scientific knowledge, but precisely how complex systems like the Antarctic ice sheets will respond is still a subject for scientific inquiry and debate.
Similarly, you’re probably familiar with the endless parade of headlines proclaiming that red wine, chocolate, or caffeine is “good for you” or “bad for you.” The human body is an extremely complex system, and the field’s reproducibility crisis means that many studies, when repeated, yield contradictory results.
In light of all of this, we must still persuade people of the truth: that science remains humanity’s best tool to understand the universe, to survive, and to flourish, and that, far from ignoring scientists and experts, we need them to take on a greater role.
We need a different kind of faith: trust that the institutions of science are behaving in an honest and rigorous way. We cannot simply answer the “Why” question with “Because science says so” and pretend that scientific knowledge is indisputable in all cases—even in areas where there is active debate. In the age of social media, where experts who’ve spent decades studying an issue have equal platforms with the gut instincts of strangers, people feel freer to believe whatever they prefer to be true.
The Values Behind It All
Professor Harry Collins, a sociologist of science, suggests that rather than portraying science as a fount of utter certainty, we should focus instead on its values. If you present uncomfortable knowledge as reams of technical jargon, or as wisdom handed down from on high by geniuses you couldn’t possibly understand, people will feel attacked. Present the working methods of the scientific community instead, and people will recognize values that they treasure. Science relies on observations and logical deductions. It is open to criticism—and scientific research is usually picked apart by fellow scientists before it can be published.
The greatest rewards aren’t for reinforcing existing paradigms, but for new discoveries and theories that persuade people to abandon the old ones. Scientific knowledge should be corroborated, and the mechanisms for finding it should be reproducible.
Scientists can disagree with each other, and they can be wrong, but they show their working and the evidence that they rely on. So, Collins argues, your trust in scientific conclusions should rest on the openness, collaboration, and expertise of those making the claims. Scientists can point to a track record of successful predictions, or to the mountains of evidence, thought, and theory behind what they say; but they can also point to the set of values that means you should trust their conclusions.
But this requires a change in perspective from the sciences and those who promote them. Most of all, we must stay true to the mission of the sciences: not as servants of profit or privilege, but as seekers of truth, so that we all might live better lives.