Before we delve into this post, let’s just take a moment to marvel at everything the scientific method has given us (along with brilliant engineering): space travel, computers, robotics, a much clearer understanding of our biology, etc. etc. etc.
But what I’m noticing is a tendency to see science as the answer to all of humanity’s questions, a panacea for all of humanity’s problems, and some sort of institution immune to the pitfalls of humanity’s darker side, and this couldn’t be further from the truth.
The evolution of knowledge -or- you’ve got to be wrong to be right
The scientific method as we know it today began with an English fellow named Sir Francis Bacon, who in 1620 published a book called Novum Organum, laying out its foundations. Every scientific researcher from that day to this uses this method (or should) to come to any sort of scientific conclusion.
Problem is, it’s far from perfect. Since 1620, hundreds if not thousands of theories have come and gone. In fact, there’s a term for a scientific theory that has been debunked: a superseded theory, and that’s a pretty good list of them. The more obvious examples: that the Earth is flat, or that the Earth sits at the center of the universe.
Let’s explore the geocentric theory a little further to prove a point. It was accepted through antiquity, not seriously questioned until Copernicus in the mid-1500s, and not conclusively disproven until the late 1600s. Given the tools available at the time, you can completely understand why people considered it true. If you look up at the sky, everything appears to be moving while you and the ground you stand upon feel completely still. Why wouldn’t people conclude that everything “out there” revolved around the Earth?
Of course, the scientific method could not be fully applied to this theory, because in ancient times the tools didn’t exist to measure much of anything. When telescopes and advances in mathematics came along, we switched over to the heliocentric view, with the Sun at the center of the universe. Further advances ultimately brought us to where we are now, i.e., a tiny corner of a relatively small galaxy amidst billions of galaxies in a seemingly infinite universe.
But who knows? That might be wrong too, and we just don’t know it yet. And here we have the first problem of science:
It could be just as wrong today as it was back then and we won’t have any way of knowing until the next theory sheds some light on the situation.
Human problems in science, part 1: egos, careers and the natural instinct not to be wrong
Here is a great study on scientific studies – though ironically once you read it, you might be tempted to be skeptical of it. The basic thrust is that most scientific studies are, to some degree, wrong.
In some cases, it’s outright fraud (like this Harvard researcher who was exposed manipulating data to get a desired result) but more often than not, it’s a problem called “publish or perish,” where researchers and academics are expected to publish studies in academic journals (the more prestigious, the better) if they are to keep their jobs, let alone advance in their careers.
The problem is, journals like to publish studies that push knowledge forward with exciting findings and big discoveries. That’s what sells copies. But the vast, vast, vast majority of scientific studies aren’t exciting at all. The hypothesis ends up being wrong, or the topic is so specialized and technical that nobody but a few specialists would even understand it, let alone care. Very often, bias creeps into a study and influences the outcome. Or, in an attempt to study something exciting, researchers design the study poorly, with a small sample size or a shaky methodology, or they write up their findings in an exciting way that obfuscates the less-than-exciting results.
People are lazy and inconsistent as well. In a study on peer review in scientific journals, researchers took studies already published by a journal, changed their details and identifying features, and submitted them to that same journal A SECOND TIME. The same pool of peer reviewers rejected 90% (!) of them, often citing “serious methodological errors.” Basically, they either didn’t read or carefully vet these studies the first time… or they didn’t the second time. Either way, it’s clear things are getting through to publication that shouldn’t. And here we have the second problem of science:
It could be wrong because researchers want to advance their careers (or simply keep their jobs) and journals want to sell more copies and peer reviewers might be lazy, incompetent or both.
Human problems in science, part 2: money
There is a lot of money in consumer goods, food products and pharmaceuticals alike, whose sale depends on approval by a government agency based on scientific studies. And companies are not at all above paying off scientists and journals and doctors to get the endorsement of their “scientific expertise.”
A few examples:
- The official guidelines for dealing with high cholesterol were written by a group of nine doctors. Eight of them were paid by manufacturers of statins, the drugs most commonly used to treat high cholesterol.
- The psychologist who pioneered the use of stimulants like Ritalin for ADHD was paid almost $2 million by stimulant drug manufacturers.
- In the 1960s, the sugar industry paid scientists to say that it was fat, not sugar, that made you fat.
These are just a few high-profile, easily digestible examples. There are plenty more, but you get the idea. And now we have the third problem of science:
Vested interests can influence the course of science and what we believe to be true (and thus human behavior) by buying off the right people.
Human problems in science, part 3: politics
Can we acknowledge that certain uncomfortable truths simply aren’t popular, so talking about them automatically demonizes the person who brings them up, and so people prefer not to bring them up at all?
In science, this sort of thing becomes a big problem. There are political issues where science might have something valuable to contribute to the discussion, but the socio-political environment makes that nearly impossible.
One example is climate change. It’s such a politically tricky topic that natural scientists working in this field would never even DARE to present a hypothesis that countered the general consensus that currently exists on the issue. And if they do question the status quo, they are ostracized. Take the example of Georgia Tech professor Judith Curry. Based on her analysis of the data, Curry suggested human-caused carbon emissions weren’t changing the climate as fast as others had suggested. That’s all. She acknowledged that the Earth is getting warmer. She acknowledged that, at least to some degree, humans had something to do with it. She simply said her data suggested it wasn’t happening as fast as others thought. As a result, she lost speaking engagements and publishing opportunities and was branded a “heretic” (interesting choice of words) and a “climate science denier” by her colleagues in the field.
Or consider the British researcher who suggested that one type of vaccine, given to a very specific subset of children, might increase their chances of becoming autistic*. That suggestion set off a global firestorm that still reverberates with misunderstanding and vitriol on both sides to this day.
Again, these are only a few of the many available examples. But they lead us to the fourth problem of science:
What we consider true is often influenced disproportionately by the social and political climate of the day, despite the permanence of truth and the transience of socio-political sentiments.
Does this mean you should completely disregard science? No, but remain skeptical and trust your common sense more than anything. It’s like Dr. Spock said in that old parenting book, The Common Sense Book of Baby and Child Care: “Trust yourself. You know more than you think you do.”
Secondly, there are other forms of knowledge we can lean on when in doubt: logic (yes, it’s different from science), intuition, spirituality, etc.
Third, just keep some perspective and give more weight to what you (or those you trust) have actually seen working in the real world. Thankfully, the big questions of science probably won’t have too much of an effect on your life. That is, unless you get involved in the medical and/or pharmaceutical industries. So watch out 😉
*For clarity, Andrew Wakefield saw a correlation between boys who were administered the triple-dose MMR vaccine before the age of 15 months AND who also previously had gastrointestinal issues and the development of autism. Nothing even close to the demonized versions of the argument presented by opposing sides that “all vaccines cause autism” or some other such dramatic over-simplification.