"Science is the only news. When you scan through a newspaper or magazine, all the human interest stuff is the same old he- said- she- said, the politics and economics the same sorry cyclic dramas, the fashions a pathetic illusion of newness, and even the technology is predictable if you know the science. Human nature doesn't change much; science does, and the change accrues, altering the world irreversibly." Stewart Brand, as quoted at EDGE

An important element of the journalist’s role when covering stories about health and science is to report, and sometimes interpret, research findings for readers. Many major news stories are based on the results of research, and they can have a big impact on readers, particularly when they raise concerns about health. Surprisingly often, though, such research turns out to be based on bad methodology, or it transpires that a journalist has misinterpreted the findings. As a result, readers are given information that is wrong.

Journalists who have a basic understanding of scientific research and know how to evaluate methodology have a great advantage over those who write stories by regurgitating the contents of press releases without applying any critical thought. By going back to the original research and studying it, journalists should be able to see the true story, not the story that publicists want them to see.

The media coverage of the MMR vaccine story demonstrates how ignorance on the part of journalists can lead to serious consequences. The reported findings of research by Andrew Wakefield linking the MMR vaccine to autism made front-page news. By the time his research had been discredited, there had been a significant decline in the uptake of the vaccine because parents had become convinced that it posed a risk to their children. If the journalists reporting the story had had a better understanding of medical research, they would have been able to recognise the shortcomings of Andrew Wakefield’s methods and the whole furore over the MMR vaccine could have been avoided.

Scientific and medical researchers have to follow very strict guidelines if their work is to be taken seriously by their colleagues working in the same field. Research findings cannot be evaluated or compared with other findings if transparent and replicable methodology is not followed. For example, the randomised controlled trial (RCT) is a recognised methodology that is used to test the effectiveness of drugs and other treatments.
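
To make the mechanics concrete, here is a minimal Python sketch of the logic of an RCT: participants are randomly assigned to a treatment group and a control group, and the average outcomes of the two groups are compared. Every number here is invented for illustration (a hypothetical blood-pressure drug assumed to lower readings by about 5 mmHg); a real trial measures outcomes in real participants.

```python
import random
import statistics

# Illustrative sketch of a randomised controlled trial (RCT).
# All numbers are made up for demonstration purposes only.
random.seed(42)

participants = list(range(200))
random.shuffle(participants)            # the randomisation step

treatment_group = participants[:100]    # receive the drug being tested
control_group = participants[100:]      # receive a placebo

# Hypothetical outcome: change in systolic blood pressure (mmHg).
# We pretend the treatment lowers it by ~5 mmHg on average.
outcome = {}
for p in treatment_group:
    outcome[p] = random.gauss(-5, 8)
for p in control_group:
    outcome[p] = random.gauss(0, 8)

treatment_mean = statistics.mean(outcome[p] for p in treatment_group)
control_mean = statistics.mean(outcome[p] for p in control_group)

print(f"Treatment group mean change: {treatment_mean:.1f} mmHg")
print(f"Control group mean change:   {control_mean:.1f} mmHg")
print(f"Estimated treatment effect:  {treatment_mean - control_mean:.1f} mmHg")
```

Because assignment is random, the two groups should be alike in every respect other than the treatment itself, so any difference in outcomes can be attributed to the treatment rather than to who happened to receive it.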

An example of how an RCT works in practice, in this case research into low-carb diets and blood pressure, can be seen here. This has been taken from the Behind the Headlines section of the NHS Choices website, which examines the research behind health stories in the media.

Researchers must submit their finished studies for peer review, a process by which research findings are submitted to a specialist journal and analysed and assessed by people with expert knowledge of the subject. Without peer approval, research has no credibility.

Even the results of studies that have successfully passed through the peer review process can only tell us so much, because flaws in the research may not be obvious to reviewers. The findings of a study may be inaccurate or misleading or may not tell the whole story. Researchers may not have followed the correct methodology or they may have made mistakes in their calculations. In some cases, researchers may have preconceived ideas about the results of their studies and, not necessarily deliberately, may shape the results to confirm those preconceptions (confirmation bias). They may also have been funded by an organisation with a vested interest in the results.

Some years ago while working as a freelance journalist I was commissioned to write about the very impressive results of a study into the health benefits of a particular nut. Only after accepting the commission did I realise that the research was funded entirely by the society of American producers of that particular nut. With my mortgage payments in mind, I have to confess that I went ahead anyway. I still feel guilty.

The systematic review process is a more reliable means of assessing the accuracy of research findings. One research study cannot give us definitive answers, but if a number of studies into the same subject all have the same or similar findings, then the results are much more likely to be correct. The more studies that point to the same answer, the more confidence we can have in the conclusion.

The systematic review process is not perfect. One serious problem is that researchers may not report the findings of studies that do not show the results they want (publication bias), as Ben Goldacre explains in this article. (If you are not familiar with Ben Goldacre’s Bad Science column in the Guardian and his books, check him out. He’s required reading for a science journalist.) However, the process is far more reliable than depending on the results of single studies.

The term evidence-based medicine (EBM) is used to describe treatments that are recommended because a review of all of the available evidence shows that the treatment is effective. The National Institute for Health and Clinical Excellence (NICE) makes recommendations on this basis.

The research methods that apply to ‘traditional’ medicine must be applied equally rigorously to ‘alternative’ medicine. Far too many stories about alternative miracle cures appear in the press, many of them based on nothing more than the anecdotal evidence of two or three people who claim to have made a medically inexplicable recovery. If a homeopathic remedy has not been tested by means of a randomised controlled trial, there is no way of knowing whether it works. If it has been tested in an RCT and the evidence shows that it has no more effect than a placebo, then it doesn’t work. A study published in the Lancet in 2005 looked at 100 clinical trials of homeopathy and found no evidence that homeopathic remedies are any more effective than placebos.
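
For a sketch of how ‘no more effect than a placebo’ is actually judged, the snippet below compares invented improvement scores for a remedy group and a placebo group using a standard two-sample t-test (via the widely used scipy library). The data are fabricated so that both groups behave identically, which is the pattern a remedy with no real effect would produce.

```python
import numpy as np
from scipy import stats

# Hypothetical trial data: symptom-score improvements for remedy vs placebo.
# Both groups are drawn from the same distribution, i.e. the remedy adds nothing.
rng = np.random.default_rng(0)
remedy = rng.normal(loc=2.0, scale=5.0, size=80)    # remedy group
placebo = rng.normal(loc=2.0, scale=5.0, size=80)   # placebo group

result = stats.ttest_ind(remedy, placebo)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
# A large p-value means the observed difference is consistent with chance:
# no evidence that the remedy outperforms the placebo.
```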

Scientific research is the human race’s way of trying to measure and understand nature and the universe. We haven’t got very far yet and our current methods may be inadequate for the task, but they are all we have and we are dependent on them for our expanding knowledge. Any journalist with a serious mission to enlighten his/her readers through genuine scientific insight needs to have a proper understanding of these methods.

Links

Randomised controlled trial


"A study to test a specific drug or other treatment in which people are randomly assigned to two (or more) groups: one (the experimental group) receiving the treatment that is being tested, and the other (the comparison or control group) receiving an alternative treatment, a placebo (dummy treatment) or no treatment. The two groups are followed up to compare differences in outcomes to see how effective the experimental treatment was. (Through randomisation, the groups should be similar in all aspects apart from the treatment they receive during the study.)"

Where possible, RCTs should be double-blind studies: a study in which neither the subject (patient) nor the observer (investigator/clinician) is aware of which treatment or intervention the subject is receiving. The purpose of blinding is to protect against bias.

National Institute for Health and Clinical Excellence (NICE) Glossary. (www.nice.org.uk)

Systematic review


A systematic review attempts to collate all empirical evidence that fits pre-specified eligibility criteria in order to answer a specific research question.  It uses explicit, systematic methods that are selected with a view to minimizing bias, thus providing more reliable findings from which conclusions can be drawn and decisions made (Antman 1992, Oxman 1993). The key characteristics of a systematic review are:
  • a clearly stated set of objectives with pre-defined eligibility criteria for studies;
  • an explicit, reproducible methodology;
  • a systematic search that attempts to identify all studies that would meet the eligibility criteria;
  • an assessment of the validity of the findings of the included studies, for example through the assessment of risk of bias;
  • a systematic presentation, and synthesis, of the characteristics and findings of the included studies.  
Many systematic reviews contain meta-analyses. Meta-analysis is the use of statistical methods to summarize the results of independent studies (Glass 1976). By combining information from all relevant studies, meta-analyses can provide more precise estimates of the effects of health care than those derived from the individual studies included within a review. They also facilitate investigations of the consistency of evidence across studies, and the exploration of differences across studies.

Cochrane Handbook for Systematic Reviews of Interventions (www.cochrane-handbook.org)
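
As an illustration of the statistics involved, here is a minimal Python sketch of one common pooling method, a fixed-effect, inverse-variance meta-analysis. Each study’s effect estimate and standard error below are invented; in a real review they would come from the individual studies identified by the search.

```python
import math

# Minimal sketch of a fixed-effect, inverse-variance meta-analysis.
# Each pair is (effect estimate, standard error) for one study -- invented
# numbers, e.g. mean blood-pressure reductions in mmHg.
studies = [(-4.2, 1.5), (-3.1, 2.0), (-5.0, 1.2), (-2.5, 2.5)]

# Weight each study by the inverse of its variance: precise studies count more.
weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * effect for (effect, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% confidence interval for the pooled effect.
low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled effect: {pooled:.2f} (95% CI {low:.2f} to {high:.2f})")
```

Note how the pooled confidence interval is narrower than any single study could provide, which is why combining studies gives more precise estimates than relying on one study alone.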


Kim Rutter currently works in marketing for a research organisation. Her previous experience includes 15 years as a freelance journalist and 18 months as a lecturer in journalism on degree and postgraduate courses at Harlow College.

Kim’s work has appeared in a wide variety of publications, including Glamour, the Independent on Sunday and the Guardian.
