Journal Club 2. How Accurate is Scientific Reporting in the Media?

Thank you to everyone who attended journal club 1. The discussion was lively and very well thought through. I hope it stimulated further thought and showed that even the seemingly simplest of questions can provoke a range of thoughts and ideas. The quality of your ideas was exceptional, and remember that on this subject we could only offer opinion, and everyone’s opinion is equally valid. The next topic should be more straightforward, and in the spirit of journal club it involves a journal. Give yourself plenty of time to work your way through it, and remember that at this stage the most important criterion is to stimulate thought and give you an insight into the workings of Science. You don’t have to understand everything.

 

Data is data and the numbers can’t lie. So produce a set of results, or better still a graph of them, and you’ve just done Science and proved your hypothesis. That’s Science, and everyone should be happy.

Take a look at the table below, from a study of approximately four and a half thousand people, which addressed the hypothesis that people who drink alcohol have a greater chance of getting lung cancer than those who don’t.

Table 1

What does the data show? What conclusions can be made? Are you convinced by it? If not, what would you like to see in order to convince you? Try and answer this before looking at the next set of results.

If you take the same set of results shown in table 1 and add another filter to separate the drinkers and non-drinkers into those who also smoke and those who don’t, we get tables 2 & 3. (Remember that this is the same study group as table 1.)

Table 2

What does the data show? What conclusions can you make?

What is the odds ratio of smoking and lung cancer?
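If you are unsure how an odds ratio is calculated, here is a minimal sketch. The counts are invented purely to show the arithmetic; they are not the figures from the tables above.

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table:
         a = exposed with disease,    b = exposed without disease,
         c = unexposed with disease,  d = unexposed without disease.
    OR = (a / b) / (c / d); an odds ratio of 1.0 means the exposure
    makes no difference to the odds of disease."""
    return (a / b) / (c / d)

# Invented counts, purely for illustration:
# 80 smokers with lung cancer, 20 smokers without,
# 20 non-smokers with lung cancer, 80 non-smokers without.
print(odds_ratio(80, 20, 20, 80))  # prints 16.0
```

An odds ratio well above 1.0 (as in this made-up example) suggests the exposure is associated with the disease; try computing it for the real tables.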

This correlation is no surprise, as it is backed up by lots of other studies linking smoking to lung cancer. But imagine if no such confounding factor had ever been considered: the raw data could completely misrepresent the true picture.
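The effect described above, where a trend in pooled data vanishes (or even reverses) once a confounder is stratified out, is known as Simpson's paradox. A short sketch with made-up counts (not the study's actual data) shows how it can arise:

```python
# Made-up counts for illustration only: (cancer cases, healthy).
# These are NOT the blog's actual tables, which are not reproduced here.
strata = {
    "smokers":     {"drinker": (30, 70),  "non_drinker": (3, 7)},
    "non_smokers": {"drinker": (1, 99),   "non_drinker": (10, 990)},
}

def rate(cancer, healthy):
    """Fraction of the group that developed cancer."""
    return cancer / (cancer + healthy)

# Within each stratum, drinkers and non-drinkers have identical cancer rates:
for name, groups in strata.items():
    print(name, rate(*groups["drinker"]), rate(*groups["non_drinker"]))

# But pool the strata and drinking suddenly looks dangerous, simply because
# most drinkers in this made-up sample are also smokers:
drinkers = [sum(x) for x in zip(strata["smokers"]["drinker"],
                                strata["non_smokers"]["drinker"])]
non_drinkers = [sum(x) for x in zip(strata["smokers"]["non_drinker"],
                                    strata["non_smokers"]["non_drinker"])]
print("pooled drinkers:    ", rate(*drinkers))      # ~15.5% with cancer
print("pooled non-drinkers:", rate(*non_drinkers))  # ~1.3% with cancer
```

The pooled table alone would "show" that drinking triples the cancer rate, even though, by construction, drinking has no effect within either stratum.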

This is taken from a comment article in The Guardian by Ben Goldacre and I suggest you follow the link to read the full version.

I now want you to look at another column by Ben, How far should we trust health reporting?

What was the hypothesis behind the study?

What methodology was used in this study?

What are the conclusions?

Are you surprised by the conclusions made and what impact does it have on the public’s perception and understanding of Science?

Read the following paper published in the British Medical Journal (BMJ). It is very heavy on statistical evidence and confidence intervals which you can ignore if statistics is not your thing. At this stage I just want you to get a feel for the way the research was conducted and what their findings were. Those of you considering a Medical Career should also link to the bmj site and take a look around.

bmj.g7015.full

Questions to help guide you through the paper

  1. What is the point of the study?
  2. What methods were used in the study?
  3. What 3 main categories of reporting were looked at by the researchers?
  4. What did they find in each of these categories?
  5. What evaluation is made by the researchers on their own study?
  6. At the bottom of the main report (just before the references) there are a few paragraphs of ‘small print’ that start with, we thank our….  What is the purpose of this? Does it help in your judgement on the statements made in the paper?
  7. Finally, how much do you trust what you see, read or hear in the media about, ‘Scientists say…’

Remember that this is just to aid discussion and frame your thoughts. You can follow all, some or none of this list. The point of this is to stimulate thought and share ideas so PLEASE POST YOUR THOUGHTS BELOW prior to our next Journal Club meeting.


12 thoughts on “Journal Club 2. How Accurate is Scientific Reporting in the Media?”

  1. Journal Club 2
    The table shows that drinkers think that people who drink are more likely to develop lung cancer than those who don’t, whereas non-drinkers think that there is less chance of getting lung cancer if you drink. I don’t feel convinced by this, as it doesn’t fit the stereotype I would have imagined the results of this experiment to follow. To convince me, I would want data on people who go out drinking a certain number of nights a week compared with those who drink less or not at all.
    The next table shows that smokers think that drinkers are more likely to develop lung cancer, whereas non-smokers think that drinkers are less likely to develop it. This, again, isn’t what I expected the data to show; maybe smokers and drinkers are more aware of the dangers they face than those who don’t smoke or drink. Smoking accounts for 30 percent of all cancer deaths and 87 percent of lung cancer deaths.
    The hypothesis of the study was to test the credibility of the health claims made by the UK’s top 10 best-selling newspapers in one week. Their methodology was that they looked at the evidence behind all of the 111 health claims in the UK’s top-selling newspapers in one week and gave each health claim a credibility rating using the WCRF and the SIGN categorizing systems. To conclude, 62% of the claims made were ‘insufficient’ by the WCRF, 10% ‘possible’ and 12% ‘probable’ with only 15% being ‘convincing’. I am not surprised by this result as most health claims tend to be false but the fact that these claims with a substantial lack of evidence change the public’s opinion is staggering as we are being told lies and theories without real support for them.
    The point of this study was to find out whether the media played a role in influencing scientists and doctors, and whether this had consequences for the public’s attitude towards doctors and the scientific community. Firstly, they identified press releases issued by Russell Group universities that concerned human health, located the sourcing of these claims, and coded them by comparing what the news article said with what the actual press release said, noting whether it had mentioned all key aspects of the press release. They found that news articles were 6.5 times more likely to contain exaggerated advice when the press release itself exaggerated, which can have an effect on readers: they may believe something preposterous and untrue, and become reluctant to trust the media or their doctors when they realise that information like this isn’t always scientifically correct.
    Other examples of scientific reporting in the media is Dr Oz in the USA which was a show whereby Dr Oz endorsed certain miracle/ magic drugs/products that had amazing claims about what they did to help you with problems such as cancer, fat loss and muscle enhancement. In this case, this is pharmaceutical companies paying doctors to endorse their products for profits they will gain by people using these drugs. This means that health claims made could be made because of an ulterior motive that isn’t just to catch people’s attention.
    Finally, I wouldn’t initially believe insane claims made by newspapers, and I would probably look further into a claim to be sure of its truth, but even in press releases you have to be wary of biases and ulterior motives which can affect the result of the research.


    1. The first tables show that drinking causes lung cancer but then the second tables show that it is the smoking that causes lung cancer and that the first table doesn’t take into account the correlation between people who drink and people who smoke and the fact that you are more likely to do one if you do the other. Tables like the first one are misleading as they fail to be specific enough about the types of people they ask and whether there are other causes to the trend outlined.


    2. I like your link to Dr Oz and how science can be used as a source of validation for some ‘dubious’ products or practices. You misunderstood the table of data in the blog, but you realised that and corrected it in your response below. Well done with your depth of thinking and your contributions to the Journal Club discussions


  2. Journal club

    In Table 1 the data shows that if you consume alcohol the chances of getting lung cancer are more than three times as high. Without drinking alcohol the chance of getting lung cancer in the studied group is 5%, whereas it dramatically increases to 16% if you drink alcohol (for this specific sample). From the data given, I can conclude that the chances of getting lung cancer are dramatically increased by the consumption of alcohol; furthermore, if you want to reduce the risk of getting lung cancer you shouldn’t drink. Personally I am convinced by the data because I already know that alcohol can increase the risk of getting cancer, although I am unsure whether the sample size is big enough to draw an accurate conclusion. I would make the sample size at least 10 thousand in each group to make sure the results are credible.

    The second set of results has added another variable (whether the person smokes or not). This data shows that smoking is a much bigger factor in the likelihood of getting lung cancer than consuming alcohol. However, if a person smokes, the odds of getting lung cancer are actually the same whether or not they drink: a 33% chance. Furthermore, the data also shows that the non-smokers category has a significantly lower chance of getting lung cancer: a 3% chance. This allows me to conclude that smoking is a far bigger contributor to lung cancer than alcohol. Looking at the data from Table 2 and Table 3, I can conclude that alcohol has very little effect on lung cancer, because the odds of cancer don’t change between drinkers and non-drinkers within each of the tables.

    The odds ratio of smoking and lung cancer is 1.0

    The hypothesis behind the study was that when scientific claims are wrong, they’re normally very wrong.

    First of all they collected lots of stories from around the news and then started to check the evidence behind every claim. They bought every one of the best-selling UK newspapers every day for a week, then sorted through them all, finding every story that made any kind of health claim about any kind of food or drink. They then graded the evidence using both the Scottish Intercollegiate (SIGN) method and the World Cancer Research Fund (WCRF) scale.

    From this they found that 111 health claims were made in those papers that week, and the majority of these claims had insufficient evidence. Only 15% of the claims were classed as convincing. Fewer insufficient claims were made in broadsheet newspapers than in tabloids, but the margin was still very slim. It concludes that the vast majority of health claims made in the papers are supported by very little evidence.

    1. They aimed to clarify how often news articles go beyond what the evidence in the peer-reviewed journals actually says.
    2. They identified all of the press releases based on studies that could be related to human health. They picked up on any articles that took a journal’s idea and exaggerated it to sound like something else, and checked whether, if the evidence had been tested on animals rather than humans, the papers still claimed the same results would apply to humans.
    3. The three main categories of reporting looked at by the researchers were exaggeration rates in press releases, the association of news exaggeration with press release exaggeration, and the effects of exaggeration in press releases on news uptake.
    4. They found that 40% of the press releases contained more direct or explicit advice than did the journal article. They also found that 33% of the primary claims in press releases were more strongly deterministic than those present in the associated journal article.
    Over 36% of news stories contained more direct or explicit advice than the journal did.
    Differing from their expectations, the proportion of press releases with at least one associated news story did not differ significantly between press releases with and without exaggeration.
    5. I think that they do this to provide evidence that not just one person who was biased on the topic made the article. It shows that many experts put their own work into this to provide an accurate and credible account. Personally, I think it makes the work more credible, but it doesn’t affect my judgement on the paper that significantly.
    6. I used to believe quite a lot of the articles based on scientific evidence, as I thought that people wouldn’t be able to alter the journals they got their information from. But from now on I don’t think I will trust the information in the news as much as I did.


    1. I like your honesty in your response that you initially fully believed the first set of data. (There was no reason not to.) But I also like the depth of analysis and the detailed response to each point in the blog. You seem to be getting a lot from this analysis. Well done.


  3. When considering the science presented in the media, it appears that you have to be extremely careful and watchful for what is true and what is published for publicity: extremes designed to generate interest. Despite this, it should not be ignored, as there are going to be some useful nuggets of information in the media that are relevant and important enough that people outside the field need to know about them. For me, the most important thing to do with a piece of research is to look it up away from the bias of generic media (newspapers) and at deeper research that hasn’t been corrupted and twisted in order to appear as something it is not.

    For me, the true problem with the media is that what they publish is being accepted: the general public aren’t going to go and research the truth about something they read in the newspaper. While we complain, we need to become more aware and put more pressure on the media to report the truth by ensuring that falsities are rejected. The media are not inclined to change at the moment, as they are making money from these stories without being criticised. We need to accept that there is some responsibility on us as readers and part of society to raise the standard, as otherwise nothing will be done.

    If scientists as a group want the media to stop using fictitious documents for their purposes, we need to make sure that the conclusions are clear and that the media don’t feel able to publish lies as there would be exterior pressure to publish truth.


    1. A well considered response to the problems of scientific reporting in the media. I think your understanding of the problem is expressed in a well argued and logical way. Well done.


    1. A thoughtful and well-reasoned analysis of the two tables and the problems of scientific reporting in the media. Perhaps what you didn’t realise from the paper is that exaggeration quite often arises from press releases composed by the scientists involved in the study and the press officer of their institution, often as a way of promoting their work. Keep this in mind when you look at the next Journal Club task.

