Science can be done in a lot of different ways. Sometimes it's observational, done by researchers quietly watching and noting what's happening. Sometimes it's experimental, done by manipulating things in a laboratory or a hospital. Sometimes it's statistical, done by crunching numbers and finding patterns in data that other people have gathered. There are combinations and variations on all of these. When you start to parse it out, there are a lot of ways to do science; medical research alone draws on several basic approaches.
No matter what the method, the goal is generally the same: to understand the world and universe we live in. But there are many ways to get there. Before you can weigh the value of the latest science news, you have to pin down what kind of study was used.
Here's a tip: Laboratory experiments are more controllable – researchers can generally do a better job of eliminating factors that might interfere with the question they're trying to answer (variables) – and so they provide more definitive answers. Observational and survey studies are often fuzzier. You can carefully define the behavior of chemicals in the controlled setting of a lab, but the effects of personal behavior on health involve many more factors that can throw off results.
The more variables there are, generally, the harder it is to come to firm conclusions. Diet and health studies are famous for this. Say you survey a lot of people about what they eat, and then track their health. Maybe you find something interesting: People who eat a lot of kumquats have longer life spans on average. And the inevitable news story comes out, “Kumquats Help You Live Longer!”
Don't believe it. The first thing is to know the difference between correlation (one thing seems to happen at about the same time, or in about the same amount, as another thing) and causation (one thing actually causes another). I like this graph from our friends at Spurious Correlations, which certainly looks like a correlation. But causation? Um, maybe not.
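You can see how easily correlation shows up without causation with a quick sketch. The numbers below are made up purely for illustration: two series that both happen to drift upward over the years will show a strong correlation coefficient even though neither has anything to do with the other.

```python
# Two invented yearly series (hypothetical numbers, for illustration only):
# both simply trend upward over time, for unrelated reasons.
kumquat_consumption = [2.1, 2.4, 2.9, 3.3, 3.8, 4.2, 4.9, 5.5]   # kg per person
life_expectancy    = [76.0, 76.4, 76.9, 77.1, 77.8, 78.2, 78.6, 79.0]  # years

def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson_r(kumquat_consumption, life_expectancy)
print(f"r = {r:.3f}")  # very close to 1.0, yet neither series causes the other
```

A correlation that strong would look impressive in a headline, but all it really tells you is that both numbers went up over the same stretch of years.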
Surveys themselves are usually questionable. There are a lot of ifs, buts, and maybes involved in making and analyzing surveys. How the question is asked is important. How the survey is sent out, who responds and who doesn't, and how the numbers are crunched all make a difference. A lot of tools have been developed to make survey work more accurate, but there is no way to eliminate all the biases.
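One of those biases – who responds and who doesn't – is easy to demonstrate with a toy simulation. The setup below is entirely hypothetical: a population split 50/50 between two candidates, where one candidate's supporters are simply twice as likely to answer the survey.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Hypothetical population: exactly 50% support candidate A, 50% candidate B.
population = ["A"] * 5000 + ["B"] * 5000

# Assume A's supporters are twice as likely to respond to the survey.
response_rate = {"A": 0.30, "B": 0.15}

responses = [p for p in population if random.random() < response_rate[p]]
share_a = responses.count("A") / len(responses)
print(f"True support for A: 50%  |  Survey estimate: {share_a:.0%}")
```

Every individual answer in that survey is honest, and yet the estimate lands around two-thirds support for a candidate who actually has half. That's nonresponse bias in a nutshell, and weighting schemes can only partially correct for it.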
If you don't believe me, take a look at the recent poll results for the Democratic Party primary in Michigan. All of them – all of them – showed Hillary Clinton winning by, on average, around 20 percentage points. These were done by the most sophisticated survey experts on the planet. And they were all wrong. This example might not be scientific, but it shows the pitfalls of depending on surveys alone to draw important conclusions.
There are questions to ask with other kinds of reports as well. Were medical studies done on humans or animals (mouse and rat results are notoriously hard to extrapolate to humans)? How many people were studied (the more, the better)? For how long?
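The "more is better" point about sample size can be put in rough numbers. A common rule of thumb – the normal approximation for the margin of error of a surveyed proportion – shows how uncertainty shrinks as the number of people studied grows. The sample sizes below are arbitrary examples.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p
    estimated from a sample of size n (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative sample sizes, assuming a result near 50/50:
for n in (50, 500, 5000):
    print(f"n = {n:>5}: plus or minus {margin_of_error(0.5, n):.1%}")
```

With 50 people the result could easily be off by double digits; with 5,000 the wiggle room drops to about a percentage point. That's one concrete reason a large study deserves more weight than a small one, all else being equal.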
And so forth. Once you start looking critically at how the study was done, you're well on your way to getting a better understanding of its value.