So, you're on Facebook or Twitter and suddenly a headline catches your eye. After reading it, your senses are tingling: it's catchy, fun, more or less informative. What do you do now? Before you share it with your five million Twitter followers and vow to never eat/drink/do whatever the headline is telling you to avoid, hold on a second.
First try to figure out the credibility of the source.
Let me first introduce a basic system for checking the credibility of health research that might help you going forward. Before we dive in, though, let me say: if it's a story that doesn't seem to affect you, or that you don't particularly care about, you probably won't take the time to do this. And that is fine - I am focusing on stories that may impact your behavior - your eating and drinking habits, your exercise regime, how often you see a doctor, what medications you take, and so on.
The system I want to propose is meant to make sure that you are actually using real, quality information before altering your way of life, in ways both big and small.
So without further ado:
Step 1: Who wrote it and what website is it on?
We have to understand a few things right up front: this is a popular media publication, it is not the original source of the information, it was most likely published without consulting the researchers, and it is competing for your attention with millions of other online news stories. It will be catchy; it will have pictures, jokes, and grand statements. None of this means the story is necessarily wrong - but we must keep these things in mind.
Looking at the authors may give you some hints about their credibility. Of course, someone working in, or with a background in, the field of neuroscience may have more insight into that field than a general-assignment journalist, but other factors matter as well. It really depends on how deep you want to go: you could look up where they work and check for conflicts of interest. Completely hypothetically, if the scientist were a consultant for a major wine manufacturer and the story were about how wine cures cancer... you may want to think twice. A famous true story, though, is the case of an aerospace engineer and part-time employee at the Harvard-Smithsonian Center for Astrophysics, who was one of the few respected scientists to speak against the general consensus that human activity is a significant contributor to climate change. He was later shown to have received $1.2 million in research funds from oil companies in exchange for his support and study publications - a conflict of interest that he did not disclose anywhere in his research. Bad. Bad. Bad.
If - by some chance - the author is a researcher or a doctor, you can also check one of the most popular medical science search sites: PubMed.gov, JAMA (the Journal of the American Medical Association), or plain old Google Scholar. By typing the researcher's name into the search field, you can see how many articles they have written and on what subjects, and in some cases you can gauge how influential they are in their field by how many times their articles have been cited by other researchers. Citation count is not a clear-cut measure of whether the research is credible, but it will at least give you an idea of how connected their work is to other research.
If the author is just a writer or a journalist, though - which is more likely - click their name and see what topics they write about. Are they a senior writer? A guest contributor? If they write articles both about cropped pants AND developments in neuroscience... I would be wary of their expertise. Personally, I also trust publications that list more information about their writers rather than less. If the writers have expertise and experience, why hide it? We have a right to know who we are trusting when we read an article.
What if there is no writer listed at all? Well, there is not much you can do there, except examine the website itself in depth. But I would automatically be a little suspicious. Credible websites have writers on staff, and writers WANT credit for their work. Why hide them?
By looking at the website, you can also pick up clues about whether you can trust them on this specific subject. For example, an article titled SCIENCE PROVES DRINKING WINE IS BETTER THAN GOING TO THE GYM found on winerist.com may call into question their objectivity on the subject matter. And of course, something published in Time, The Washington Post, or The Economist, or on a news website such as CNN or BBC, may, by reputation, scope and experience, imply more credibility than a medical story published on a blog you've never heard of or a website that specializes in something completely different. In these high-level publications, if not a journalist's sense of ethics or education, then at least an editor will fact-check the work and call out blatant misrepresentations, since they have a reputation to protect. Blogs, not so much.
Step 2: Where are your sources?
As I mentioned before - unless they are protecting someone's identity as an informant - a journalist has to say where they got their information, just as researchers cite facts they take from other studies. In both cases this is so others - in this case, you and me! - can follow up on their sources and make sure they are not making stuff up. Once you start to focus on this issue, it would be almost comical, were it not so depressing, how few articles feel it necessary to cite the original research study - and when they do, they hyperlink words like "a study" instead of citing it outright.
In my experience, two times out of five, the links are broken and go nowhere, or they lead to another blog or publication instead of the original source.
Test it out yourself! Click them and see where they go. If an article does not cite the original study, I assume either that they think we don't care or that they didn't take the time to read it themselves. Both options are quite depressing.
Step 3: Where was the research published?
It is important to look at the original source to compare the news you found to what the researchers actually said. Here you have the opportunity to evaluate the journal, the research itself, the researchers and how relevant the research still is.
Is the journal peer reviewed?
If you are not familiar with the term "peer review", it means exactly what you think it does: before publication, articles in the journal have to go through a thorough review by other experts in the field. This is similar to the fact-checking an editor does at a magazine or newspaper. You can usually find this information in the "about" section of the journal. Most big journals are peer reviewed, but more influential journals tend to have more rigorous review and to be more reliable.
How do you know if a journal is influential?
Type its name into Google along with the words "impact factor". This will give you a number, and the higher the number, the better. It is a ranking system based on how many times the average article in a journal has been cited by other journals and articles in a particular year. The idea is that the more important the research, the more often other researchers will refer to it. Though again, quantity alone is not very reliable, so don't base your entire analysis on this number.
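For the curious, the classic two-year impact factor is just a ratio: citations received this year by the journal's articles from the previous two years, divided by the number of articles it published in those two years. Here is a toy sketch with entirely made-up numbers:

```python
def impact_factor(citations_to_prev_two_years: int,
                  articles_in_prev_two_years: int) -> float:
    """Rough two-year impact factor: citations this year to the journal's
    articles from the previous two years, divided by how many articles
    the journal published in those two years."""
    return citations_to_prev_two_years / articles_in_prev_two_years

# Hypothetical journal: 480 articles in 2015-2016, cited 12,000 times in 2017
print(impact_factor(12_000, 480))  # 25.0 - in the range of top-tier journals
```

A journal scoring in the twenties sits among the most-cited in science; many perfectly respectable specialty journals score in the low single digits, which is another reason not to lean on this number alone.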
Here I do have to mention that a number of journals have popped up in recent years that engage in what is called "predatory publishing" - meaning they solicit researchers for papers to publish in exchange for a sum of money. These journals have no review process whatsoever, so anything can get published and enter the researchosphere without being checked for accuracy. This, though horrible and unethical, doesn't seem to be illegal. Luckily, there are people like Prof. Jeffrey Beall, a University of Colorado Denver librarian, who until recently maintained a list of potential predatory publishers and stand-alone journals.
In an interview with The Chronicle of Higher Education, Beall describes the phenomenon this way: "Predatory open-access publishers are those that unprofessionally exploit the gold open-access model for their own profit. That is to say, they operate as scholarly vanity presses and publish articles in exchange for the author fee. They are characterized by various levels of deception and lack of transparency in their operations. For example, some publishers may misrepresent their location, stating New York instead of Nigeria, or they may claim a stringent peer-review where none really exists."
There have also been cases of fraudulent peer review - where a researcher was allowed to recommend someone in their field to review their paper and instead made up a name and email address and reviewed their own research. This, fortunately, is rare, and journals have been working hard to catch these kinds of activities and to retract the research if it has already been published.
Which brings us to... who are the researchers?
Here is another opportunity to find more information about the authors. You can do what I mentioned before and look up their credentials and other publications, but also check which university they work for, who funded the research, what year it was conducted, and whether there have been any retractions or corrections - if there were, they would be shown here in the original publication.
Speaking of corrections and retractions - that is one of the problems with popular media headlines: they all publish the original sensational findings, but they usually do not follow up if the study is later corrected or retracted completely. And this happens surprisingly often, since problems such as plagiarism, duplication, error, fraud, falsification of data, and authorship concerns are hard to catch during the initial review.
According to Thomson Reuters, there were 22 article retractions in 2001, but a whopping 339 in 2010!
Step 4: How was the study conducted?
First of all, figure out whether it's primary research - that is, the authors did the research themselves - or a review of other people's research. If it covers many studies, the following steps would have to be repeated for each of them in turn. Good luck with that.
So: who was the study performed on?
Was it rats? Bats? Humans? Men, women, both? The study most easily applicable to real life will include a mix of people of all ages, genders, races, incomes, education levels, and so on - a sample representative of the general population. Unfortunately, many studies are performed either on animals or on people from WEIRD societies - Western, Educated, Industrialized, Rich and Democratic. The term comes from a fantastic article published in Behavioral and Brain Sciences called "The weirdest people in the world?". It's definitely worth a read, and not written in too technical a language. So why are WEIRD people overused in studies? Because college students, who are found in abundance at the universities where a lot of these studies are conducted, are also very eager to participate for compensation. This means that findings are generalized to the entire population when the study itself focused on a very specific type of person - one who is, more often than not, in the minority.
How big was the sample size?
10 people? 100 people? 1,000 people? This matters when they report percentages. "50% of participants" is a lot less impressive when only a few participants were involved, and generalizing to an entire population when you only studied 10 people is not real research. It may be valuable preliminary, exploratory work, but we must be very wary of generalizing such results to a large scale.
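The sample-size point can be made concrete with a quick back-of-the-envelope calculation. This sketch (my own illustration, not from any particular study) uses the standard 95% margin-of-error formula for a reported proportion:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a reported proportion p
    from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# "50% of participants" with 10 people vs. 1,000 people
small = margin_of_error(0.5, 10)    # roughly +/- 31 percentage points
large = margin_of_error(0.5, 1000)  # roughly +/- 3 percentage points
print(f"n=10:   50% +/- {small * 100:.0f} points")
print(f"n=1000: 50% +/- {large * 100:.0f} points")
```

In other words, "50%" from a 10-person study is statistically compatible with anything from about 19% to 81% in the wider population, which is exactly why such results should be read as exploratory.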
Step 5: What do the results actually say?
Once you start looking through research papers, you will notice a certain structure they follow, which may help orient you if the language of the study is a bit intimidating and technical. There is an abstract that summarizes the paper. This is usually a good start, as it tends to avoid overly complicated technical terms. If the full text of the article is not available online for free, this may be the only thing you have to go on. Not bad, but it's good to have all the details from the full text if possible.
If you do have access to the full text, there will be a results section near the end, followed by a discussion section. The results section is worth a look, and the discussion section will try to break down what the results actually mean in the greater framework of the field, what future studies could do with this research, and so on. This section mixes facts and speculation, and more often than not it is where the quotes come from that feed the media appetite for groundbreaking findings and statements. Researchers are VERY wary of making grand conclusions - they know they did one study of one specific thing in very specific circumstances - but they will say "this may mean that" or "this could be possible in the future" in order to push research further. Research inspires research, so moderate speculation is healthy. It's when the media takes this speculation as fact that the problems begin.
Watch out for confusions between correlation and causation!
This is a big one, and it often stumps scientists on issues big and small: is lung cancer merely associated with smoking, or does smoking cause it? Is climate change caused by human activity or simply correlated with it? On these issues there is a clear scientific consensus - but what about smaller studies? There is a hilarious website and book that shows very strong correlations between the most surprising things, such as: per capita cheese consumption and the number of people who die by becoming tangled in their bedsheets; the number of people who drowned after falling out of a fishing boat and the marriage rate in Kentucky; and US spending on science, space and technology and the number of suicides by hanging, strangulation and suffocation. Obviously, correlation alone does not imply causation, but in some studies it can be hard to tell the difference.
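To see how easily two causally unrelated quantities can line up, here is a small, made-up simulation (the variable names are a nod to those spurious-correlations examples; every number is invented): two series that merely drift upward over the same decade come out strongly correlated.

```python
import math
import random

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(1)
years = range(10)
# Two fictional, causally unrelated quantities that both happen to trend upward
cheese = [20 + 2 * t + random.uniform(-1, 1) for t in years]
bedsheet_deaths = [300 + 15 * t + random.uniform(-8, 8) for t in years]

r = pearson(cheese, bedsheet_deaths)
print(f"correlation: {r:.2f}")  # strongly positive, yet neither causes the other
```

Any two series with a shared trend - population growth, inflation, time itself - will correlate like this, which is why a headline built on a single correlation deserves extra scrutiny.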
Finally, watch out for omission of outliers.
It is sometimes easy to say that some data was somehow compromised and is no longer worth mentioning in the study results. This is easy to notice if the study says it started with 50 rats and then, in the results section, only discusses how 47 of them did. The problem is that those omissions are often made when some results don't fit the narrative the researcher really, really wants to push through. Imagine devoting 10 years to a study: everything turns out exactly as you hoped, these results could make or break your academic career, and there are just 3 rats that died under weird circumstances.
It is very easy to figuratively - and maybe even literally - sweep those results under the rug and pretend they are irrelevant.
Once you decipher the meaning behind all the scientific jargon in the article itself, you can then go back to the original news story and compare and contrast what the main point of each were.
Are all the facts accurately portrayed? Are all the key details of the study presented? What crucial information did they leave out? What did the story decide to focus on, the results or the speculations? Then you can finally make your decision whether this is a trusted news source and whether the story is okay to share.
So those are the rules of the game; focus on these. Your focus determines your reality:
- Who wrote the news feature and what website is it on?
- Where are the sources?
- Where was the research published?
- How was the study conducted?
- What do the results actually say?
That being said...
What should we NOT focus on when evaluating the credibility of a website or a health news story?
- How professional it looks. Of course, credible organizations should and probably will have nice websites, but anybody with an internet connection can make a professional website in minutes.
- Ranking in Google results. The fact that a page came up first just means that lots of people have clicked on it before and that it's the best match for what you searched for. Google is a search engine, not a truth machine. If you search for "wine is good for you" or "wine is bad for you", you will most likely find evidence that supports your search statement, because that is what you are asking Google to find.
- Comments or ratings. Though I love reading comments and reviews as much as anyone, I'm sure you've noticed that even for the simplest things on Amazon, the results are completely polarized - some people love a product and some people hate it. Same with comments on websites: you don't know those people, and you don't know what their expertise is. To a certain extent, you shouldn't judge a site or its content by its active audience - or the opposite: if there are no comments or likes, it doesn't mean a website isn't credible or actively read. A huge portion of the population are passive consumers on the internet.
- Quantity of information. Just because a site has a lot of content doesn't mean it operates at a higher level of expertise than a website that expresses its content more concisely. It's quite often the opposite.
At this point you may start to wonder: isn't there some law or ethics committee responsible for making sure the media isn't making stuff up? Well, I'm glad you asked. Let me tell you a story from an article by Gunther Eysenbach - recognized by many as one of the leading researchers in the fields of eHealth and Internet & Medicine - titled Credibility of Health Information and Digital Media: New Perspectives and Implications for Youth. I highlighted this article so much that it had more yellow than white by the time I was done with it.
Basically it went something like this:
In September 1999, one of the once-leading health portals, DrKoop.com, was criticized for lack of Web ethics. In an article published in the New York Times, the site (partly owned by former U.S. Surgeon General C. Everett Koop) was accused of inadequately distinguishing between news content and promotional content. For example, DrKoop.com published a list of hospitals designated as the most innovative across the country, not revealing the fact that these hospitals actually paid for the listing, among other things.
The incident sparked the development of a code of ethics for health websites. Such a code was developed, and I will link it in the discussion post. Published in May 2000 in the Journal of Medical Internet Research, it declared that "All who use the Internet for health-related purposes must join together to create an environment of trusted relationships to assure high quality information and services; protect privacy; and enhance the value of the Internet for both consumers and providers of health information, products, and services." It sets out guiding principles such as Honesty, Quality, Candor, Privacy and Accountability - and it wasn't the first such code, and I daresay not the last. All these guidelines sound great, right? The problem? This was a policy proposal that was never made into policy. That means it has no legal status, and no one is obligated to follow it.
Doctors must follow the Hippocratic Oath to do no harm in their service, while online health media has free rein, with no system of accountability in place.
And this is exactly why the type of work we are doing here on this blog and on the podcast is so important. We must raise the general level of health and research literacy in the nation to stop the flow of misinformation. Join me?