Reporting science

The past month has seen the publication of two reports discussing the way science is communicated: one, from the House of Commons Science and Technology Committee, examines the role of peer review in scientific publication; the other, by evolutionary biologist Steve Jones, considers how science is reported to the public.

Professor Jones's report was commissioned by the BBC Trust and looks specifically at the impartiality and accuracy of the BBC's science coverage.1 It provides some fascinating insights into the corporation's culture, particularly in relation to reporting ‘controversial’ topics. However, given that the BBC's coverage of science is generally held up as being exemplary, many of its comments could be said to apply equally, or perhaps to an even greater extent, across the media. The report is well worth reading, not just by journalists and scientists who might be concerned about how their work is reported, but also more widely. Topics such as BSE, control of foot-and-mouth disease and bovine TB have had a high media profile over the years, and while the report does not deal specifically with these subjects, it goes a long way to explaining why they get reported in the way they do.

One of the more interesting arguments advanced in the report is that the BBC's strict rules on reporting, and journalists' desire to achieve balance and be impartial in their coverage of science, can perversely have the opposite effect. This, Professor Jones suggests, is because in attempting to present both sides of an argument, and given the often confrontational nature of much general reporting, journalists can give the same weight to what are essentially unsubstantiated beliefs as to scientific arguments for which an overwhelming body of evidence is available. This, he suggests, is to misunderstand the way science operates, and can distort the coverage that results.

Also of interest are his remarks about the types of stories that get covered. Most of the science stories that appear in the British media originate from press releases produced by a relatively small number of journals and organisations, and they tend to deal with a relatively small range of topics or only part of a topic, rather than reflecting the scientific literature as a whole. To an extent this is understandable given the time and other pressures on journalists, but he suggests that they could be more proactive in seeking out stories and make more use of the resources now available for searching the scientific literature.

As far as coverage of biomedical research is concerned, he notes that the focus tends to be on clinical topics rather than subjects such as molecular biology, in which a great deal of effort and public funding is being invested, and that, in cases where fundamental research does get reported, there is a tendency to ‘clinicise’ the findings and be overoptimistic about the potential benefits. More generally, he notes that subjects can sometimes be presented as being controversial when, in scientific terms, the issue has been settled and controversy no longer exists.

The Science and Technology Committee's report on peer review,2 which is also worth reading, is relevant to the arguments put forward by Professor Jones, because it is largely on peer review that the credibility of science is based. Professor Jones acknowledges that peer review is not perfect, and the select committee's report confirms this, suggesting, among other things, that ‘pre-publication peer review has evolved in a piecemeal and haphazard way to meet the needs of individual scientific communities’ and that ‘considerable differences in quality exist’. It notes, however, that editorial peer review ‘is considered by many as important and not something that can be dispensed with’ and makes a number of recommendations on how the arrangements could be improved.

The Science and Technology Committee makes some interesting observations on journal impact factors, expressing concern that some institutions may be using these as ‘a proxy measure’ for the quality of research or of individual research articles.

Taken together, the two reports serve to emphasise that reports on science should be interpreted carefully, bearing in mind the processes they have gone through and the extent to which the information provided can be relied upon. Reports should be critically assessed on the basis of the evidence presented, whether they appear on television or radio, in newspapers, scientific journals or on the web.
