“Recent studies show…”
“Scientists have proven…”
Statements such as these are all too common in the media these days, whether in a news program looking for a good headline or in articles shared on social media. (You will never believe WHAT HAPPENS NEXT!) Unfortunately, claims like these are often greatly misleading, taken entirely out of their original context and stretched to conclusions the data were never designed to support. So, I thought we would take a look at the most common misconceptions about scientists, research, statistics, and the scientific method.
I. Mad Scientists
“Scientists say” is one of the vaguest claims imaginable, yet it subconsciously makes any statement more credible. First of all, “scientists” are not a professional body like doctors or lawyers. Anyone doing “science” is a scientist; there is no science test you have to pass to wear a lab coat. Of course, science is rigorously overseen by peer review and often stringently judged before being published in accepted journals, but there is nothing stopping you from publishing your findings yourself. The fact that a “scientist proved” or “an experiment showed” something is no reason to believe the information without understanding the experiment.
Furthermore, even a claim by a well-respected scientist is nothing without the data to back it up. (Also, stop with the Einstein quotes; the majority were not from him, and we overlook the fact that a physicist is not automatically an expert on God, education, or philosophy.)
This is what separates scientists from oracles. Well, that and oracles are a bit more fun to visit. In any case, just hearing that “a scientist claims” should not factor into the reliability of the claim.
Summary: scientists are just people, no one decides who is and who isn’t a scientist, and a true scientist will always be ready to provide the data behind their conclusions.
II. Method in the Madness
How does science work? Contrary to common belief, standing around in lab coats with beakers is not science. Summarized, it is: Observation -> Hypothesis -> Testing -> Theory. Observation means looking at the world, finding something interesting, and formulating a research question. (Why do apples fall? Do storks really bring babies?) Once you have your question, it is time to develop a hypothesis, a statement which you hope to either support or refute. (Apples fall because of gravity. More storks will lead to more human babies.) Once you have your hypothesis, it is time to gather data to test it. If the hypothesis holds up, you formulate it into a theory. A theory is more general than a hypothesis and can also predict one variable (for example, the number of babies) from another (the number of storks). The theory then needs to be tested again, leading to further research and experiments. Now, the scientific method is just the general pattern of research; the exact steps vary greatly depending on the field and topic, but this scheme holds true for most research.
Here is what you may not know: science cannot PROVE anything. Science works by disproving. I cannot definitively prove that gravity causes apples to fall; I can only show that there is no other visible force causing them to fall. Falsifiability is the defining trait of a scientific theory: no statement is scientific unless it can, in principle, be proven false. This may feel like splitting hairs, but it makes a huge difference to our understanding of the reliability of science. To quote that inimitable paragon of science, The Core: “That’s all science is: best guesses.” You may ask: if I disprove all the alternatives, have I not proven my original idea? Yes and no. Yes, in the sense that, as far as we can tell, it is the best explanation. But also no, because we cannot know what we do not know. There may be other explanations we cannot foresee because we have not yet discovered them or cannot yet measure them. For example, yes, gravity makes apples fall. But in 100 years we may discover that gravity is part of another force, or that it does not hold constant, or any number of other possibilities. This becomes particularly relevant when dealing with people, because it is much easier to miss an important factor. And this is why one should always be skeptical of studies claiming that “all men are secretly rapists” or “video games lead to violence.” Even the most perfect study can overlook a phenomenon that hasn’t been discovered yet.
So, when we are judging the validity of a scientific claim, these are the questions we should be asking:
Who is conducting the research and why?
What are they trying to prove/disprove?
What data are they using or collecting and how are they testing with it?
Is the conclusion in line with the data collected?
Is there any other factor which could reasonably also account for the phenomenon?
III. My statistically relevant milkshakes bring all the boys to the yard
Statistics sound extremely boring to anyone not working in mathematics or science, but in fact, we all love them. These are the fun claims we like to play with: “We only use 10% of our brains,” “House dust is 80% human skin,” and “Swallowed chewing gum stays in your stomach for 7 years.” (All false, by the way.)
So, here are the biggest misconceptions regarding statistics:
“Statistically significant link between…”: When comparing two variables (for example, intoxication and driving accidents), there is a chance that the link one finds is completely random. By convention, this chance is usually set at 5%. A correlation (higher intoxication = more accidents) is statistically significant when there is at most a 5% chance of it being a random, meaningless fluke. This does NOT mean that the correlation is relevant, nor does it reflect how strong or important the correlation is. Statistical significance refers only to the process of analysing the link, and says nothing about the variables themselves.
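A quick simulation makes that 5% convention concrete. The sketch below (plain Python; a hypothetical example, not taken from any real study) generates many pairs of completely unrelated random variables and uses a permutation test to check how often pure noise still looks “significant” at the 5% level. The answer comes out at roughly one pair in twenty:

```python
import random
import statistics

def pearson(x, y):
    """Pearson correlation coefficient, computed by hand."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def perm_pvalue(x, y, n_perm=199):
    """Chance of a correlation at least this strong if x and y were unrelated."""
    obs = abs(pearson(x, y))
    y = list(y)
    extreme = 0
    for _ in range(n_perm):
        random.shuffle(y)  # destroy any real pairing between x and y
        if abs(pearson(x, y)) >= obs:
            extreme += 1
    return (extreme + 1) / (n_perm + 1)

random.seed(1)
trials, false_alarms = 100, 0
for _ in range(trials):
    x = [random.gauss(0, 1) for _ in range(30)]
    y = [random.gauss(0, 1) for _ in range(30)]  # generated independently of x
    if perm_pvalue(x, y) < 0.05:
        false_alarms += 1
print(f"{false_alarms} of {trials} pure-noise pairs looked 'significant' at the 5% level")
```

In other words: run enough studies on nothing at all, and “significant” results appear anyway, which is exactly why a single significant link proves so little on its own.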
Correlation is not causation: Even if you prove a link between two things, that does not mean that one causes the other. For example, there is a correlation between the amount of water people drink and the amount of ice cream they eat: a rise in one accompanies a rise in the other. But neither causes the other; rather, higher temperatures make people both drink more water and eat more ice cream. This one is particularly misrepresented in the media, because while a link between two things may sound interesting (and may indeed lead to very interesting follow-up studies), it is almost meaningless out of context. Going back to our stork example, there was in fact a study which showed a statistically significant, direct correlation between the number of storks and the number of babies born in Germany after WW2. And a Harvard student has created a website called Spurious Correlations which shows correlations between ridiculous facts, such as the number of movies Nicolas Cage stars in and the number of pool drownings. Link here.
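The water and ice cream example can be reproduced in a few lines. In this sketch (plain Python; every number is invented purely for illustration), temperature drives both variables, so they correlate strongly, but once we hold temperature roughly fixed by looking only at hot days, the correlation all but vanishes:

```python
import random
import statistics

def pearson(x, y):
    """Pearson correlation coefficient, computed by hand."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

random.seed(0)
n = 1000
temp = [random.uniform(0, 35) for _ in range(n)]  # the hidden confounder (deg C)
water = [1.0 + 0.05 * t + random.gauss(0, 0.3) for t in temp]      # litres/day
ice_cream = [0.2 + 0.03 * t + random.gauss(0, 0.3) for t in temp]  # scoops/day

# Water and ice cream never influence each other, yet they correlate strongly:
print("corr(water, ice cream):", round(pearson(water, ice_cream), 2))

# Hold temperature roughly fixed by keeping only hot days (over 30 deg C):
hot = [(w, i) for t, w, i in zip(temp, water, ice_cream) if t > 30]
w_hot = [w for w, _ in hot]
i_hot = [i for _, i in hot]
print("corr on hot days only: ", round(pearson(w_hot, i_hot), 2))
```

Controlling for the confounder (here, crudely, by slicing the data) is exactly what a careful study does and what a catchy headline leaves out.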
Average misinformation: Most of the numbers you hear in the media are averages, but the average on its own is not particularly informative. The distribution is much more important. Let’s look at children raised by their grandparents: the grandfather is 72, the grandmother 69, the son is 5 and the daughter is 7. The average age in the family is 38.25. A family where the parents are between 45 and 50 and the children already 20 will have a similar average age, but be markedly different. And this is just one problem with averages. Let’s say you wrote five exams at school and scored between 98% and 100% on the first four, but were sick for the fifth and received 0%. Your test average is now below 80%, which reflects much more poorly on your academic record than your actual performance. Here’s another example: in Zürich the average income is over 4,000 francs a month. If you wanted to work there as a cleaner or a waiter, however, you would not earn anything near that number, because the average is inflated by all the high-earning bankers, CEOs, etc. living in the city.
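The exam and income examples above boil down to the difference between the mean and the median. A small sketch using Python’s statistics module (the income figures are invented for illustration):

```python
import statistics

# Exam scores: four strong results, then a 0% from a missed exam
scores = [98, 99, 100, 99, 0]
print("exam mean:  ", statistics.mean(scores))    # dragged below 80 by one outlier
print("exam median:", statistics.median(scores))  # reflects the typical performance

# A skewed income distribution: many modest earners, a few very high ones
incomes = [2500] * 80 + [3500] * 15 + [40000] * 5
print("mean income:  ", statistics.mean(incomes))    # inflated by the top earners
print("median income:", statistics.median(incomes))  # what a typical person earns
```

Whenever a headline quotes an average, asking for the median (and the spread) is the quickest sanity check.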
Data collection: Not only can data be misconstrued and misused in any number of ways, there is also the question of what data is being used in the first place. Humans are almost ridiculously bad at estimates, even about themselves. “How much Coke do you drink on average?”, “How often do you exercise?” and “How happy are you?” all produce answers which may be very different from reality, and that’s ignoring questions people often lie about, such as weight and income. But there are still more problems! The framing of a question can influence the answer (Are you a vegetarian or an animal murderer?), and on a scale of 1 to 5 (On a scale of 1 to 5, how much do you trust biased statistics?) people tend to answer within the middle range. And often there is simply no data available: I believe it was a university in Canada that wanted to research the effects of online pornography on male students, but was unable to conduct the study because it could not find any students who had never viewed pornography. Besides being both a little bit funny and a little bit sad, this goes to show that no reliable conclusion can be drawn from faulty or missing information.
So, in conclusion: the media is lying to you (as you and every conspiracy theorist know), though some of it may even be unintentional. When you hear about some crazy study (Women are more open to romantic advances when they are not hungry), it takes just a brief moment to check whether it is reputable, or, if you don’t have time for that, at least to see whether it was published in an accredited journal. Also, the biggest and most forgotten point in science is this: knowledge is in motion. What is surely known today may be ridiculous tomorrow, and keeping established theories open to constant checking is the best way to keep us moving forward.
Also, bonus Cage.