6 Shocking Studies That Prove Science Is Totally Broken
Even if you're not all that into science, it's still a big part of the news that reaches you on a day-to-day basis -- you'll see interesting headlines about how studies show marijuana cures loneliness or how other studies say pot ruins your memory, and you kind of just assume they're true. If scientists say eating Cheerios will lower your cholesterol, you feel better about buying them. After all, if you can't trust scientists, who can you trust?
But one thing we've learned, as a site that likes to publish reader-friendly science articles that link to these studies, is how much of the stuff that comes our way is, well, worthless. As it turns out, the problem is ...
A Shocking Amount of Medical Research Is Complete Bullshit
For all the advice they keep throwing at us from the field of medical science -- eat less meat, or more fiber, or fuck fewer drainpipes -- it may seem like medical research is moving forward at a breakneck pace. But then, why do so many health-food crazes seem to disappear as quickly as they arrived? Why do we still not know whether vitamin C can help cure your cold? Frighteningly enough, it's because most medical research is bullshit.
You had the power to get a boner inside you all along!
Don't take our word for it -- listen to Dr. John Ioannidis and his team of celebrated meta-researchers. Their job is to comb through all these awesome-sounding medical studies to assess their validity. And, surprise: Up to 90 percent are critically flawed in some way or another. If you're hoping this is contained to fringe, little-used research, find a new vessel for your misplaced hope. Ioannidis and his crew examined 49 of the most highly regarded medical findings in the last decade or so -- between a third and a half of them were straight-up wrong or highly exaggerated.
That's why all the stories about "promising new research" tend to come and go with the speed of Disney pop stars ("Take Omega-3 to prevent heart disease! Or don't! Who the fuck knows?"). And it isn't just the stuff that turns up on BuzzFeed: We're talking about the body of knowledge that your doctor relies on when prescribing drugs, giving advice on dietary habits, and recommending surgery, among other things.
This is why all operating rooms now have a live feed of trending Twitter hash tags.
How can so many studies be so badly flawed? Well ...
Many Scientists Still Don't Understand Math
Science tends to require the use of numbers. And while most of us probably have a tough time figuring out what all those numbers and letters and Greek symbols in algebra equations are supposed to mean, we're content to leave it to the experts to do all the understanding for us. Man, it would be hilariously terrifying if those experts turned out to be as clueless as the rest of us, wouldn't it?
If God wanted us to know trigonometry, he wouldn't have given us calculators.
Enter Kimmo Eriksson, a Swedish mathematician. He decided midway through his career that pure math wasn't doing it for him anymore and moved into cultural studies. It was at that point he realized his new colleagues were basically awful at math. So he conducted an experiment to find out how widespread the issue was. Eriksson picked two research papers at random and sent them out to a bunch of scientists. In half of the papers he randomly added an equation that had nothing to do with the study whatsoever, and in context was utter nonsense.
Eriksson asked the recipients to judge the quality of the research. The mathematicians and physicists were basically unimpressed, but in every other field the inclusion of the equation got the papers a higher ranking, even though it was pointless bullshit -- it just looked more impressive with the complicated math in there. More than 60 percent of the medical researchers, the people trying to save all of our lives, ranked the junk papers as better on the grounds of, "It must be right -- look at all this awesome math shit he's got in there!"
Seems legit.
The research by Eriksson (or "Kimmo the number wizard," as he is known in the humanities) is not the only evidence that scientists treat math as some mysterious occult force. Research into ecology and evolution shows that papers are 28 percent less likely to be cited for every additional equation per page. It seems that basically everyone who isn't a physicist or engineer treats math with a policy of "run away as quickly as possible."
... And They Don't Understand Statistics, Either
If we say a study found a "statistically significant" link between the use of feather pillows and brain cancer, what do you think that means? It means the scientist found something that you'd better damned well pay attention to, right?
Put down your Spirographs and pay attention! Science is talking!
Not really.
"Statistical significance" is just the fancy name for what happens when you see a relationship between two variables that probably isn't due to random chance. A hell of a lot of scientific research involves investigating relationships in statistics, like whether a certain drug has a correlation with getting cancer, for example. The problem is that, in this context, "significant" doesn't necessarily mean "important." For instance, there is a statistically significant link between ice cream consumption and murder rate. But before you start burning ice cream vans, this is just a confusion between correlation and causation -- ice cream consumption and murder both just happen to increase in the summer.
Which both happen to coincide with all the good TV shows going on hiatus.
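To see how easy it is to gin up a "significant" link like that, here's a minimal sketch in Python (every number below is made up; the ice cream and murder figures are fake, and the only point is that one shared driver -- summer -- makes two unrelated things rise and fall together):

```python
# A minimal sketch (made-up numbers, not real crime data): two variables that
# have nothing to do with each other both rise in the summer, so a naive
# correlation test calls the link "statistically significant."
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
months = np.arange(120)                   # ten years of monthly data
summer = np.sin(2 * np.pi * months / 12)  # shared seasonal driver

ice_cream_sales = 100 + 30 * summer + rng.normal(0, 5, size=months.size)
murder_rate     =  10 +  2 * summer + rng.normal(0, 1, size=months.size)

r, p = pearsonr(ice_cream_sales, murder_rate)
print(f"correlation r = {r:.2f}, p-value = {p:.2g}")
# The p-value is tiny, so the link counts as "significant" -- but ice cream
# isn't causing murders; the season is driving both. Significant != causal.
```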
If you didn't know how weak a "statistically significant" finding is, then don't worry -- neither do scientists. When they find a link between sleepiness and vitamin D, or whole fruits and decreased risk of type 2 diabetes, they call it "significant" and, more often than not, end up exaggerating the hell out of their claims. The media ends up reporting it inaccurately because the researchers don't include the proper caveats. One statistician took a look and found that "eight or nine of every 10 articles published in the leading journals" make the massive error of equating statistical significance to importance.
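And here's the flip side, sketched with simulated numbers (the "health score" and the feather-pillow grouping are invented for illustration): once the sample gets big enough, even a difference too tiny to matter will clear the "statistically significant" bar.

```python
# A sketch with simulated data: a trivially small difference plus a huge
# sample equals a microscopic p-value. "Significant" here only means the
# difference probably isn't exactly zero -- not that anyone should care.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
n = 200_000
pillow_users  = rng.normal(loc=100.0, scale=15.0, size=n)  # made-up "health score"
everyone_else = rng.normal(loc=100.2, scale=15.0, size=n)  # a 0.2-point difference

t_stat, p = ttest_ind(pillow_users, everyone_else)
print(f"difference = 0.2 points out of ~100, p-value = {p:.1e}")
# Prints something like p = 3e-05: "significant," and still utterly negligible.
```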
As an example, one recently published study purported to have found a link between walnuts and a drop in diabetes risk. How'd they discover that? Well, by tracking a whole bunch of nurses, looking at their walnut consumption, and seeing which ones developed diabetes. To the layperson, and by extension the media, this sounds like a pretty cut-and-dried way of studying the phenomenon. But think about that -- did they look at other factors, like whether people who ate fewer nuts also tended to go home at night and eat a whole tub of butterscotch ice cream? Nope -- they just asked the participants how often they ate walnuts and used the answers as the basis of their conclusions.
Additionally, squirrels generally make for a poor control group.
By the same token, we could investigate the link between Apple products and hipster mustaches to conclude that iPhones somehow stimulate hair growth on the upper lip. Or, as this sarcastic study pointed out, you can statistically prove that listening to certain music makes you younger.
But that just brings us to another point ...
Scientists Have Nearly Unlimited Room to Manipulate Data
When you set out to test something, like if you're trying to figure out if wolf bites have a statistical link to werewolfism, a whole lot of your results are decided before you even start. As was pointed out in the intentionally silly study we mentioned above, in any experiment the scientist gets to decide which things to compare (What about other animal bites?), how long to collect the data (Would you get different results six months from now?), which data to include (Are you accounting for the subjects' age? Diet? Ethnicity? Phase of the moon under which they were bitten?), and on and on -- countless little choices about what to include and, more importantly, not include in the study.
What about werewolves in will-they-won't-they romances that are abruptly resolved with vague implications of pedophilia?
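Here's a rough sketch of how those little choices add up (all the data below is random noise, and the "wolf bite" framing is just our running joke): quietly try twenty different ways of slicing a study with no real effect in it, and the odds are roughly two in three that at least one slice comes up "significant."

```python
# A sketch of "researcher degrees of freedom" (pure noise, no real effect
# anywhere): try enough arbitrary analysis choices -- subgroups, time windows,
# outcome measures -- and some comparison will cross p < 0.05 by luck alone.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(2)

def one_study(n_choices=20, n_per_group=50):
    """Run many arbitrary comparisons on random data; keep the best p-value."""
    best_p = 1.0
    for _ in range(n_choices):
        bitten   = rng.normal(size=n_per_group)  # "wolf-bite" group, pure noise
        unbitten = rng.normal(size=n_per_group)  # control group, pure noise
        _, p = ttest_ind(bitten, unbitten)
        best_p = min(best_p, p)
    return best_p

false_alarms = sum(one_study() < 0.05 for _ in range(1000))
print(f"{false_alarms / 10:.0f}% of studies on pure noise found a 'significant' link")
# With 20 tries per study, roughly 1 - 0.95**20, i.e. about 64%, will.
```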
So, for example, here's an experiment you can try on your own: Look up some psychological studies on the Internet. Every time the participants are anything other than college students, take a drink. Congratulations! You're probably still as sober as a Mormon priest. That's because when psychology professors are looking for test subjects, they have the overwhelming tendency to use the large pool of students they see staggering around on campus. It's just so much easier than going out into the world and actually rounding up a cross-section of random folk (and law enforcement frowns on going out in a van and just snatching them in the dead of night).
That means a whole lot of behavioral science is centered around studies done in First-World universities, and those studies fall prey to the assumption that their young, relatively healthy, sedentary, economically privileged, and mostly white test subjects are in any way indicative of the people who make up the population as a whole (i.e. the other 99.7 percent of the world's inhabitants).
"Fine, you can throw a woman in there to even things out. But only one."
Unfortunately, a study illuminating the psychology of an average college student, whose primary philosophy is to have sex with that redhead in Anthro 101 and who lives on ramen noodles and liquor, may not be applicable to even, say, a poor single mom living three blocks away from the university. So behavioral science has developed tunnel-vision on the richest, most open-minded fraction of educated young people in the world, and assumes whatever answers it finds can be generalized to anyone else.
But hey, scientists are people, after all, and they study what they know. Funny, then, that a fairly popular avenue of research involves participant observation of strip clubs. As in, scientists receiving grant money to sit and watch strippers pole dance. You know, for science.
It's said that Einstein did some of his best work while getting a lap dance to "Pour Some Sugar on Me."
The Science Community Still Won't Listen to Women
If you're listing the body parts that are most useful to you when it comes to doing good science, chances are that a brain and a functional set of hands are somewhere near the top of your list, while penises probably don't rate quite so highly. But it turns out that when they're deciding who is going to do a better job unraveling the mysteries of the universe, the people in charge of hiring scientists seem to find what's between your legs a lot more relevant.
Hey, at least we're not burning women who can do math as witches anymore, right? Progress!
To see just how bad this is, a team of researchers created a fake application for a laboratory manager's job and sent a copy to 127 professors. They asked the recipients to evaluate the applicant's competence, decide how deserving he or she was of the job, and determine how much he or she should be paid. Every application sent out was the same but for the name, which was randomly assigned as either male or female.
The scientists did what they do best. They took an objective look at the evidence presented, assessed the individual's demonstrated merits, then ignored that and decided "likelihood of having boobs" was the key factor. They gave female applicants lower marks on every single point.
All they need is one woman to get PMS and start integrating with total disregard of the constant term!
On competence, male professors judged their penis-wielding brethren at 4.01 out of 5, and gave the ladies a score of 3.33. And no, it's not just because the people doing the hiring were male -- the female professors also scored the boys higher, 4.10 compared with 3.32 for the women. The average salary men recommended for men was $30,520. The average salary women recommended for women was $25,000. Favoring the dudes happened irrespective of the age, gender, or job status of the professor. Unless the researchers went with the names Mr. N.D. Tyson and Mrs. S.L. Palin, this would have to mean that scientists universally judge females to be worth less. The professors also reported that if they were in charge they would spend less time tutoring the female candidates.
This sort of sexism is doing a very good job at pushing women out of science altogether. The higher up women get in the fields of science, technology, engineering, and mathematics, the longer they have to tolerate this sort of shit and the more likely they are to quit. But hey, it's not like there are any negative consequences to arbitrarily wiping out a huge chunk of the talent pool, right? Since when has any field been improved by inviting a whole bunch of new geniuses into it?
It's All About the Money
On some level, we're all aware that scientists are regular people. They eat, sleep, and enjoy dressing up in furry costumes just like the rest of us. But we also like to think that scientists are somehow above the greed and bias that clouds the average human brain -- that maybe the years of bullying and unintentional celibacy rob them of such base human failings. At the end of the day, it's all about the cold, hard data.
Rarely does one find a gun clutched in its fingers, though.
Sadly, the truth is that money and ambition tend to kick the crap out of objectivity at every step of the research process. To get tenure and funding, and to hang on to their prestigious positions, researchers are under constant pressure to publish their results in the best journals. Those journals want only important, interesting findings, which means that researchers have every motivation to find crazy cool things, regardless of how much bullshit might be required to get there.
And because "conflict of interest" is only a problem if you have morals, various industries often hire researchers to study the safety and efficacy of their products. This works out just about how you'd expect. When, for example, drug manufacturers pay for studies, they mysteriously tend to find that their meds are more effective than non-industry studies do. We'll call this the "makin' it motherfucking rain" effect, and it generally holds across all industry-funded studies.
But our study that reading Cracked makes you 37 percent more attractive is totally legit.
Happily, these totally-valid-you-guys-we-swear studies are often used by the FDA (and other entities) when assessing new products. Usually, these folks stick to the typical methods of massaging results, but sometimes the conclusions their employers want them to reach are so far from the truth that no amount of statistical foreplay is going to get them all the way. That's when straight-up fraud comes in handy.
Just to be clear: It's not that you should suddenly stop trusting science in general -- without science it would be impossible to distinguish charlatans from people who have actual wizard powers. But there's a big difference between accepting scientific consensus and just blindly believing everything said by a guy in a white lab coat.
Particularly if he has a maniacal laugh.