ESSA

Statistics, social science, and the media


By Mike Pottenger

April 26th, 2013


In research, definitive answers are anything but. Only careful, sceptical reading can yield the full picture.


Everybody likes to hear a confident opinion. Take, for example, Truman’s wish for a one-armed economist: one who would be unable to tell him ‘On the other hand…’. Headlines are often written with an air of certainty, because qualifications and conditions aren’t as convincing. The problem is, it can be difficult to distil even a single scientific study without distorting the complexity of the work.

Unfortunately for scientists (whether social or natural), while people prefer statements delivered with certainty, we are in the business of uncertainty. Our methods are based on scepticism, our conclusions full of caveats. Unlike Truman’s fantasy economist, we do have another hand and are often obsessed with what might be on it.

This is why, when someone conducts a study showing one kind of relationship, others will be eager to see whether they can find the opposite, or no relationship at all. But this healthy scepticism is sometimes transmitted to the broader public in unhealthy ways.

Wait, what do you mean by ‘healthy’ or ‘unhealthy’?

When someone conducts a study showing one kind of relationship, it’s healthy that scepticism drives scientists to try and find the opposite or no relationship. But the way in which scientific research is reported – from the selection of stories to the headlines used – can create an unhealthy scepticism towards science in general.

For example, when the media focus on an isolated study demonstrating some extreme and unusual finding, which is shortly thereafter contradicted by another, the credibility of science suffers (Ben Goldacre discusses this topic from the two minute mark here, and Dan and Dan demonstrate part of his argument nicely in The Daily Mail Song). This can also skew perceptions of the balance of evidence: there may be many unreported or even unpublished supporting studies for one side of the debate, and few or none for the other.

Consider, for example, all that “lead causes crime” business earlier this year. In case you missed it, in January Mother Jones featured Kevin Drum’s article titled ‘America’s real criminal element: lead’. He argued that lead exposure, particularly due to leaded petrol, was responsible for a 20th century American crime wave. A few days later, Discover ran Scott Firestone’s piece ‘Does lead exposure cause violent crime? The science is still out’.

You could be forgiven for having shrugged your shoulders and walked away. You may just have had time to glance at these headlines, and they make it clear that the pieces were opposed, right? Well, actually, there’s a lot here that both pieces agree on, and this issue shows that even when science is debated in public as it should be, it’s easy for people to walk away with the wrong impression about what’s going on.

Drum and Firestone’s pieces with their contrasting headlines both have figures and cite formal academic research, so doesn’t this just go to show that science and statistics can be twisted to say anything you like? I mean sure, Drum probably found a correlation between lead and crime, but everyone knows that correlation doesn’t imply causation, right?

So did Drum forget that correlation is not causation?

This well-known cautionary statement can easily become a refrain for the offhand rejection of research, even when that research points to correlations interesting or important enough to warrant further investigation (as XKCD puts it, “Correlation doesn’t imply causation, but it does waggle its eyebrows suggestively and gesture furtively while mouthing ‘look over there’”). Still, the scepticism itself is a good thing, as it places the burden of demonstrating causation where it should lie: on those reporting the results.

XKCD #552, ‘Correlation’: http://xkcd.com/552/
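As a toy illustration of why a strong correlation alone proves little, consider two series that share nothing but a time trend. Neither causes the other, yet they will correlate almost perfectly. (The series here are invented purely for illustration; they are not data from the lead-crime studies.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical measurements: neither causes the other,
# but both happen to drift upward over fifty years.
years = np.arange(50)
series_a = years + rng.normal(0, 3, size=50)       # e.g. some environmental measure
series_b = 2 * years + rng.normal(0, 5, size=50)   # e.g. some social statistic

# The shared trend alone produces a very strong correlation.
r = np.corrcoef(series_a, series_b)[0, 1]
print(f"Pearson correlation: {r:.2f}")
```

This is why, as Drum’s approach suggests, a single correlation is only a starting point: the case for causation rests on many studies, using different measures and designs, all pointing the same way.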

Drum shouldered that burden well. He explicitly addressed the issue, explaining in detail some of the different studies conducted that found different results. He argued that when a range of diverse studies of reasonable quality, some using different measures, all find evidence of the same relationship, then it’s unlikely that it’s just correlation – something more is likely to be at work. And Firestone did not directly criticise Drum for conflating correlation and causation.

Oh. OK. But Firestone must be proving Drum wrong, then, right?

Nope. Firestone didn’t claim to find evidence refuting Drum’s conclusions. Yes, Firestone noted one aspect of one study that showed no evidence supporting Drum’s hypothesis. But after others interpreted this as a flawed argument from the absence of evidence, Firestone clarified the intention: to emphasise that Drum’s conclusion was premature without more studies tracking individuals over time (as opposed to studies comparing different groups of individuals at the same point in time), not to claim that the absence of evidence proved Drum wrong.

So I guess Drum thinks it’s a proven fact and no more research is required?

Wrong again. In a response, Drum agreed that more research is warranted, though he pointed to the wide range of different studies showing similar results and concluded that the varied research into this topic provides the best explanation yet for the crime wave in question.

Wait – what? Do they disagree or not?

Yes, they do. Basically it comes down to how confident Drum can afford to be on the basis of the available evidence. Drum reviewed evidence from a range of studies, made the case that programs to remove lead from the environment should be undertaken immediately, and portrayed that recommendation as a win-win and a no-brainer.

Firestone felt Drum’s language (referring to the lead-crime link as ‘blindingly obvious’, for example) was overconfident, risked oversimplifying the complexity of the link between lead and crime, and that there was no reason for ‘calling the violent crime link closed with such strong language’. Drum acknowledged that his piece had ‘involved a fair bit of handwaving’, with the intention of provoking further, more detailed research and analysis.

So you’re saying they were just splitting hairs?

Not at all. The differences between their positions aren’t trivial. But in spite of these differences, the two writers sit at different points in the same region of the spectrum of scientific reasoning, rather than at polar opposites. These were disagreements at the margin about how best to strengthen an already compelling case. When challenged, each clarified or moderated his position.

Ultimately, the exchange has been civil, constructive, and a reasonably good example of how the internet can provide a better medium for this sort of back-and-forth discussion than print media. Why? Because it’s easier to go deeper than the headlines, to check exactly what was said, where, and by whom, and to find links to the original research (another point emphasised by Goldacre).

So does this mean we’re entering a golden age of scientific journalism?

Alas, no. No matter how hard people work to communicate science clearly, there will always be the problem of needing a confident headline that will grab attention.

More accurate titles in this case would have been ‘We now have good reason to prioritise and conduct further research into the substantial evidence that lead is America’s real criminal element’, and ‘While there is compelling evidence in support of the lead-causes-crime hypothesis, until we obtain more conclusive evidence of the relationship Drum describes in longitudinal studies, the science is technically still out’. But these are nowhere near as catchy, are they?

The views expressed within this article are those of the author and do not represent the views of the ESSA Committee or the Society's sponsors. Use of any content from this article should clearly attribute the work to the author and not to ESSA or its sponsors.

  • Monika

    Brilliant article Mike. Thought you might also appreciate the following comic http://www.phdcomics.com/comics.php?f=1174

    • Mike

      Nice!

My favourite one on this sort of thing is another XKCD which I hope to work into a piece about hypothesis testing later in the year:

      http://xkcd.com/882/

  • Sharon Lai

    Thanks for this article Mike! It certainly highlights one of the important issues researchers face when disseminating the results of their studies, and the role of the media in shaping public perception of science. The public would definitely rather hear about a conclusive and groundbreaking finding (catchy title included) than a list of caveats! If the results of scientific studies do make it into mainstream media outlets though, I think it’s inevitable for reporting to be deliberately framed in a way that generates the most readership interest. Unfortunately, this can lead to unhealthy scepticism towards scientific research as you noted, but at least we’re seeing more widespread distribution of knowledge!
    P.S. Monika: love the comic!

    • Mike

      If you’re interested in this stuff you should definitely check out Ben Goldacre. He does science rather than social science per se, but he’s all over this stuff. He’ll be speaking in town soon.
