
Humane Educator’s Toolbox: Bringing Critical Thinking to Scientific Studies & Their Reporting

Written by Marsha Rakestraw | 2 Comments | Published on May 14, 2013 | Filed under Humane Connection
The content that follows was originally published on the Institute for Humane Education website at http://humaneeducation.org/blog/2013/05/14/scientific-studies/

Skim the news headlines most days, and you’ll see reports about scientific studies on all sorts of topics. The trouble is that many of these stories offer soundbite analysis and quick conclusions, often misleading or lacking relevance and rigor. And as citizens we often go no further than skimming the headlines and adding their bold assertions to our body of “knowledge” about health, the environment, education, and more.

Neither we nor reporters usually delve into these studies and examine the details in depth. Who has the time?

We can’t put every scientific study we hear about under a microscope, but we can develop the critical thinking skills that will help us better assess the validity and relevancy of these studies and help students to do the same.

One thing we can do is consider the studies themselves. A recent AlterNet article, “6 Ways Scientific Studies Can Trick You,” highlights some of the tactics (sometimes intentional, sometimes not) that can affect the conclusion of a study, such as:

1. Start with a wrong assumption.

2. Throw out data you don’t like.

3. Set improper thresholds.

4. Share findings that aren’t statistically significant.

5. Design the study to get the desired results.

6. All of the above.
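Several of these tactics boil down to statistics. As an illustrative sketch (not from the AlterNet article), the short simulation below shows how tactics #3 and #4 play out in practice: run enough experiments on pure noise, and the standard p < 0.05 threshold will still hand you “significant” findings to report.

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

def coin_experiment(flips):
    """Flip a fair coin `flips` times and return the number of heads."""
    return sum(random.random() < 0.5 for _ in range(flips))

# Run many independent experiments where there is truly no effect
# (a fair coin), then see how many look "significant" anyway.
experiments = 2000
flips = 500

# Under the null hypothesis, heads is approximately Normal with
# mean flips/2 and sd = sqrt(flips * 0.25); deviating by more than
# 1.96 sd is the conventional two-sided p < 0.05 cutoff.
sd = (flips * 0.25) ** 0.5
cutoff = 1.96 * sd

false_positives = sum(
    abs(coin_experiment(flips) - flips / 2) > cutoff
    for _ in range(experiments)
)

rate = false_positives / experiments
print(f"'Significant' results from pure chance: "
      f"{false_positives}/{experiments} (about {rate:.1%})")
```

Roughly 5% of the experiments cross the threshold by chance alone. A study that quietly runs many comparisons and reports only the ones that cross the line (while throwing out the rest) has manufactured a finding out of noise.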

Although we’re taught that science is an objective undertaking, the truth is that variables, data, and framing can be manipulated.

Dr. Ben Goldacre’s TEDx talk “Battling Bad Science” is very helpful in demonstrating how evidence can be distorted and studies manipulated.

In fact, there are now so many instances of manipulation, inaccurate information, errors, and even fraud and plagiarism in scientific studies that websites like Retraction Watch have sprung up to monitor and report on them.

Additionally, all of us, scientists and citizens alike, bring along our own biased lenses. Just one example:

For hundreds of years, the common practice (and common assumption) has been to require experiments on animals, in the belief that such experiments are relevant and necessary for helping humans. But recent research is leading some to question the validity of studies relying on animal testing (at least for testing the safety of chemicals), saying that:

“The results of these experiments challenge the longstanding scientific presumption holding that animal experiments are of direct relevance to humans. For that reason they potentially invalidate the entire body of safety information that has been built up to distinguish safe chemicals from unsafe ones. The new results arise from basic medical research, which itself rests heavily on the idea that treatments can be developed in animals and transferred to humans.”

These new studies themselves need to be evaluated, but their potential ramifications throw a great deal of previously accepted science into question.

And then there’s the way scientific studies are reported. As Gary Gutting mentions in a recent New York Times commentary:

“Media tend to present almost any scientific result they report as valuable for guiding our lives, with the entire series of reports accumulating a vast body of practical knowledge. In fact, most scientific results are of no immediate practical value; they merely move us one small step closer to a final result that may be truly useful. Too many news reports present experimental results as providing good advice on which we can reliably act.”

Gutting goes on to offer the idea of a labeling system for scientific reporting to help clarify the validity and importance of a study:

“Is it merely a preliminary result (a small-scale heuristic study meant to suggest a hypothesis that will itself require many stages of further testing before we have a reliable conclusion)?  Is it a larger-scale observational study (showing a correlation but by no means establishing a causal connection)?  Is it a large-sample randomized controlled test (establishing a causal connection, given specific conditions)?  Or, finally, is it a well-established scientific law that we know how to apply in a wide range of conditions?”

Scientific studies and their reporting offer a valuable opportunity to hone critical thinking skills and to remind us not to blindly accept what we read or are told. In fact, since science is wrapped in such a cloak of authority and credibility, it’s vital that we look beyond those news headlines and dive deeper into the details — and teach students to do the same.

~ Marsha

Continue the conversation! Leave your comment below, and “like” and share this post via your social media sites.

 


About Marsha Rakestraw

Marsha is IHE's Director of Education Resources and Alumni Relations and part of the online course faculty.

2 Comments

Anna says:

Often I think it is the journalist doing the misrepresentation rather than the researchers themselves. It is true that studies not showing desired results can get thrown out, and that’s a terrible problem in science. But within science journalism there is a different set of problems. Like you say, a journalist might report on one scientific study without looking at the greater context. A single study just represents one aspect of the bigger picture — it might go against the scientific consensus, or it might support the consensus. I think it would be great if more journalists investigated the bigger picture when reporting on scientific results.

Also, I often notice that a scientific study will get media attention just because its subject matter grabs headlines. Examples from this year include reporting on pubic lice and a sexually transmitted virus, and their (alleged) connection to pubic-hair removal. (I discuss those stories here.) When I looked at the original studies (which were actually presented as letters to the editor, and based on very small studies with no control groups!), I thought they weren’t newsworthy at all. Valuable for reporting within scientific journals, of course, as it could give other researchers ideas for well-designed, controlled studies — but in the media they were misrepresented and their conclusions overstated. The scientists themselves were more honest, openly admitting that they were speculating and that further study was needed. But the journalists just reported their tentative conclusions as facts.

Marsha says:

Anna, thanks so much for your comments and for those great insights. I think a couple of takeaways we share are that:

1. Not everything is newsworthy :)

2. We’re all fallible. And because we are, it’s essential that we as citizens, educators, activists, and students don’t take (at least most) things at face value. Thus the value of bringing a critical (meaning thoughtful, not judgmental) lens to what we encounter.

Thanks so much!

Peace,

Marsha