The Sunday Telegraph reported this morning that:
“Stroke victims who are admitted to hospital are far more likely to die if they are treated outside central London, an investigation has found. The NHS statistics show survival rates for stroke victims sent to central London hospitals are 54 per cent higher than for those in some parts of the country.”
Fortunately, the ‘Health Correspondent’ Laura Donnelly has got her statistical knickers in a twist. She goes on to write:
“The death rate within 30 days of admission for stroke is 14.6 per cent in the capital’s central sites, according to analysis of the nine years’ data ending 2009 – compared with rates of more than 22 per cent in industrial cities and manufacturing towns”
So, in the poor industrial towns the survival rate is 100% – 22% = 78%. If London is 54% better (154% as good), that makes the survival rate in London 0.78 × 1.54 = 120%. Wow! In London, for every 100 stroke victims taken to hospital, 120 of them survive!
It is the death rate in the industrial towns that is higher: 22% divided by 14.6% ≈ 151%, which is where the ‘54 per cent’ figure comes from. But the vast majority of stroke victims survive, so the difference in survival rates is not the same number at all.
Actually, of course, the survival rate in London is 100% – 14.6% = 85.4%. The ratio of survival rates is then 85.4% divided by 78% ≈ 109.5%. So London survival is better by a whopping 9.5%.
Not 54%, then.
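The arithmetic can be checked in a few lines. A sketch, using the two death rates quoted in the article:

```python
# Death rates quoted in the article, as fractions
death_london = 0.146  # central London hospitals
death_towns = 0.22    # industrial cities and manufacturing towns

# The Telegraph's "54 per cent" is a ratio of DEATH rates...
death_ratio = death_towns / death_london            # ≈ 1.51

# ...but the ratio of SURVIVAL rates is far smaller
survival_london = 1 - death_london                  # 0.854
survival_towns = 1 - death_towns                    # 0.78
survival_ratio = survival_london / survival_towns   # ≈ 1.09

print(f"Death-rate ratio:    {death_ratio:.2f}")
print(f"Survival-rate ratio: {survival_ratio:.3f}")
```

The two ratios answer different questions, and only the small one measures what the headline claims to measure.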
Never mind, Laura Donnelly, there will be a real scandal along soon if you wish hard enough. Perhaps an introductory high-school statistics text could be put on your Christmas list this year?
Much as I enjoy smoke free pubs and restaurants, I always took the view that I had a free choice of where to go of an evening if I wanted to avoid cigarette smoke. Admittedly, there were few locations that banned smoking, but that was a commercial decision of the proprietors.
For those who see the ban on smoking in enclosed public spaces as a small first step towards banning smoking anywhere, the key is any evidence that smoking in private homes and cars impinges on the rights of powerless third parties. So the news that passive smoking increases the risk of stillbirths by a whopping 13%, and the risk of birth malformations by 23%, was reported widely.
The BBC news site quoted the press release freely:
“Fathers-to-be should stop smoking to protect their unborn child from the risk of stillbirth or birth defects, scientists say. They looked at 19 previous studies from around the world.
A UK expert said it was ‘vital’ women knew the risks of second-hand smoke.”
Vital that women knew the risks? So what are the risks? The paper (abstract) was not primary research, but combined data from multiple studies, which sounds good. But most of the studies were either of poor quality or did not address the desired health outcomes. In the end, it came down to 19 studies with four separate outcome measures. Two of them, the risk of miscarriage and the risk of perinatal or neonatal death, came out negative: no increased risk. The other two came out with the 13% (4 studies) and 23% (7 studies) increased risk.
So, the news reports could have started with headlines of “Passive Smoking Does Not Cause Miscarriage” or “New Study Produces Contradictory Results”, or even “We’re Trying Really Hard But We Still Can’t Prove Passive Smoking is Particularly Dangerous”. Although I can’t imagine researchers from the UK Tobacco Control Research Network policy advocacy group pushing that last one!
When researchers attribute risks to particular behaviours, they calculate not only the best estimate of the increased risk (eg an odds ratio of 1.13, or an increase of 13%), but also the high and low limits within which they are confident that the ‘true’ risk lies. Any measurement will have uncertainties, and to be confident that a risk is real it must be repeatable: that is, doing the whole study again will produce the same result.
Obviously, you can’t wait until the next study before you publish, so you use the mathematics of chance to see what the results might have been if things had gone slightly differently during the study. The outcome, then, is not a ‘best’ figure, but a ‘confidence interval’ within which the ‘true’ result would lie 95% of the time (or outside which it would fall 5% of the time).
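A minimal simulation (my own illustration, not from the paper) shows what that 95% means: repeat a study many times, and the computed interval should contain the ‘true’ rate in roughly 95% of the repeats.

```python
import random

random.seed(1)
true_rate = 0.15   # assumed 'true' risk, for illustration only
n = 1000           # subjects per simulated study
trials = 2000      # number of repeated studies
covered = 0

for _ in range(trials):
    # Simulate one study: count how many of n subjects are affected
    cases = sum(random.random() < true_rate for _ in range(n))
    p = cases / n
    # Normal-approximation 95% confidence interval for the rate
    half = 1.96 * (p * (1 - p) / n) ** 0.5
    if p - half <= true_rate <= p + half:
        covered += 1

print(f"True rate inside the interval in {100 * covered / trials:.1f}% of studies")
```

Run it and the coverage comes out close to 95%, as advertised; each individual study, of course, has no way of knowing whether it is one of the unlucky 5%.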
The study found that two of the outcomes had confidence intervals that started below an odds ratio of 1. That is, there is a real chance that there was no risk at all, even though the ‘best’ figure was higher. So the results are dismissed as not significant.
What of the other two? Stillbirth came out as 1.01 – 1.26 (middle value 1.13), with malformations as 1.09 – 1.38 (middle value 1.23). So, even without a further look, stillbirths could be increased by perhaps 1%, or as much as 26%. We can’t tell which, but we can tell that presenting 13% as the figure is misleading.
But it is worse than that. The researchers looked at many outcomes and picked out for publicity the ones which gave the wanted results, which makes it far more likely that you will find significance somewhere. As an example, let’s say that you roll four dice. The chance that any one of them will come up a six is 1/6 (or 17%), but the chance that at least one of the four will come up a six is much greater, at 52%.
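The dice arithmetic can be checked directly: the chance that none of four dice shows a six is (5/6)⁴ ≈ 48%, so at least one six comes up about 52% of the time.

```python
# Chance that a single die shows a six
p_one = 1 / 6                       # ≈ 0.17

# Chance that NONE of four dice shows a six
p_none = (5 / 6) ** 4               # ≈ 0.48

# So the chance that at least one of the four shows a six
p_at_least_one = 1 - p_none         # ≈ 0.52

print(f"P(at least one six in four rolls) = {p_at_least_one:.3f}")
```

The same effect applies to statistical tests: run four of them at the 5% level and the chance of at least one spurious ‘significant’ result is well above 5%.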
For the researchers to be confident in their overall result of, say, ‘passive smoking causes harm to unborn babies’, an allowance must be made (eg the Bonferroni correction) for each of these multiple comparisons, to bring the overall confidence back up to 95%. For the four tests here, the interval half-widths should be increased by a factor of around 1.27, so they become:
Stillbirth relative risk: 0.98 – 1.30
Congenital malformation relative risk: 0.91 – 1.40
Note that both now include a relative risk of 1 (ie no risk) in the range. On this test, none of the outcomes is significant.
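The widening can be sketched as follows. This is my own reconstruction, not the paper’s method: widen each interval on the log scale (where ratio intervals are symmetric) by the ratio of the Bonferroni-corrected z-value to the usual 1.96; the exact output depends on the method actually used.

```python
import math
from statistics import NormalDist

def bonferroni_widen(lo, hi, m, conf=0.95):
    """Widen a 95% CI for a risk ratio to allow for m comparisons.

    Works on the log scale, where CIs for ratios are symmetric.
    """
    z_single = NormalDist().inv_cdf(1 - (1 - conf) / 2)       # ≈ 1.96
    z_multi = NormalDist().inv_cdf(1 - (1 - conf) / (2 * m))  # ≈ 2.50 for m = 4
    factor = z_multi / z_single                               # ≈ 1.27
    centre = math.sqrt(lo * hi)             # geometric midpoint of the interval
    half = math.log(hi / centre) * factor   # widened half-width on the log scale
    return centre * math.exp(-half), centre * math.exp(half)

# Stillbirth 95% CI reported in the paper: 1.01 - 1.26
low, high = bonferroni_widen(1.01, 1.26, m=4)
print(f"Widened stillbirth CI: {low:.2f} - {high:.2f}")  # now straddles 1
```

On this sketch the stillbirth interval becomes roughly 0.98 – 1.30, crossing the no-risk value of 1.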
Two Bites at the Cherry
The upshot is this. If you use statistical arguments to judge outcomes, you should know that the more measurements you make the more likely you are to come up with spurious results, so you should make allowances for it.
The headline should have been, at best, “Our Research Was Too Underpowered to be Sure of Anything, but it is Worth Asking for More Funding”.
Unlikely to be reported in the papers, but honest.
I normally avoid leaving comments on online newspaper articles as I don’t enjoy the anonymous behaviour of participants: rudeness, ignorance and unwillingness to engage in proper debate. But I did get stuck in to one of James Delingpole’s Telegraph Blog entries. (My spell-checker wants to replace ‘Delingpole‘ with ‘Delinquent‘. I’m tempted.)
Delingpole seriously embarrassed himself in the BBC’s Horizon programme Science Under Attack when he debated climate change with Paul Nurse, a Nobel Prize-winning scientist. Specifically, Delingpole described his climate change ‘journalism’ as interpreting interpretation: he didn’t read scientific papers, not even the abstracts.
More specifically, he has found a few people who share his biases and then uses their writings as evidence for his own opinions, as they use his to buttress theirs. ‘Science’ is a word used often, but the scientific method seems to be unknown to them as they resort to rhetoric instead. It seems that winning an ill-natured argument is far more important to them than actually being right. (They fervently believe they are right, of course, though they make no effort to develop secure lines of reasoning, relying on the whole list of pseudo-science techniques described here.)
The comments on the blog entries are even less nuanced, as they don’t even try to use rhetorical tricks and deceptions. If you have ever had so little going on in your life that you feel able to interact with the low-lifes that inhabit these sites, then you may skip to the end.
But this is the nature of argument from those that worship the self-important journalists such as Delingpole. Insults are the order of the day: anonymous posters are just rude. If you come up with a good argument, data that disproves a statement or even just try to act as a moderating influence, then expect to get flamed.
Ignore reasoned arguments
Tell the poster that their sort of person makes you sick and you can’t believe how much they wriggle and squirm in a proper debate. Tell them how thin skinned they are. If you are lucky, they will be distracted by your bilge and not notice that you had no answer to their line of argument.
Consensus Plays No Part in Science
If anyone has the front to point out that the specialists in the field are virtually unanimous in their judgements, so you are likely to be mistaken, bang on about the ‘fact’ that consensus plays no part in science. This is a great move, since you can act as an expert in your own right at the same time as denying real experts know anything about the reality of the science. It is, of course, nonsense. Science does not have authorities that pass judgement on theories when there is disagreement. The only way for tentative theories to enter the canon of accepted principles is for them to be debated back and forth along with the data in journals and at conferences, until everyone has had their objections answered and consensus is reached. Far from ‘consensus plays no part in science’, a lack of consensus is fatal to the progression of a scientific theory. Consensus is the only way in science.
Apply Different Standards of Evidence to Opponents
Appear to carefully pick apart statistical inferences with which you disagree, then slip in a non sequitur based on an absence of evidence. For instance, challenge the last fifty years of warming by selecting your data from one of the regularly occurring decades where the warming slows or stops for a few years, and say that there is no statistically significant warming. If there is warming, pick a new start year that is especially warm and try again to fit a negative gradient. Ignore the fact that the correlation is very weak (r = 0.1) and insignificant. Try the line that since warming is not proven, cooling must be happening. And add an insult as a diversion so no-one notices the sleight of hand.
Libel the Experts
Repeatedly point out that some of the experts are actually computer modellers, chemists or physicists, not ‘climate experts’, and make claims that they are in the pay of large governmental and NGO conspiracies. Refer to your own sources as ‘renowned climate experts’, even if they are retired engineers or computer modellers. (‘Renowned’ is the give-away term, as no reputable scientists refer to anyone as renowned.)
Quote Your Own Consensus
Quote a big, long list of scientists who signed up to an online statement supporting your view, but don’t worry if none of them are actually working in a related field of study. As long as they give academic titles and put PhD after their name, they are scientists, right? And don’t call it a consensus, as you have already claimed that consensus is not part of science.
Hide Contrary Views
To force recent posts that challenged your statements off the bottom of the first page, find a contrarian web site and cut and paste large chunks of it into your posts. This has the bonus of not requiring any thought whatsoever on your part. When the offending posts have disappeared, you can repeat what you wrote before, secure in the knowledge that new readers will not see that there are good reasons not to trust what you say.
This was the first time I tried to sustain interest in a blog comments section for a couple of days, and there were over a thousand posts in that time (some commenters seemed to post continually day and night – didn’t their mothers tell them to come up out of the basement and go to bed?)
I tried to direct arguments towards a discussion of evidence, towards an understanding of the statistical limits of certainty, towards the problematical bias of picking an opinion and searching out individuals who support that idea instead of dispassionately assessing opinions and evidence in the round. But it was for naught.
Delingpole told Paul Nurse in the Horizon programme that he didn’t read proper research papers, because peer-to-peer review (clever, huh!) was an improvement on peer-review because it allowed journalists and anyone with an interest to get stuck in.
And he said it with a straight face!
The BBC’s reporting is going backwards. For years it was my go-to news site as it always had sidebar links to the websites of the source material for a news story. Even if the reporting was poor, I could always read the original papers or quotes. But a recent revamp of their site has dropped all the external links! Was it too hard for the journalists to keep track of where they got their material?
OK, I’m being a little unfair, since their main source has been press releases for some years – the similarity of the wording is striking when you can read the original press releases on AlphaGalileo. They have been republishing company and university press officer propaganda with barely a change for years.
Don’t just take my word for it: if you want a pithy and knowledgeable statement of all that is wrong with science journalism, you can’t do much better than read this, by The Lay Scientist blog at the Guardian.
That post is a great parody of most of the British media’s science output, fitting all research into a bland identikit structure that neither educates the masses nor informs those who already know something about the subject. The masses do not benefit from the patronisingly shallow overview that is so simplistic that even the basic principles are left out as potentially too confusing. The educated do not benefit as there is not even a useful link to the research abstract, the researcher’s homepage or even the organisation involved.
And all the “important” words are put in “scare” quotes, so the “journalist” does not even have to take responsibility for the words they have “written”. The parody is great, and the writer has now followed that up with an inside, in-depth analysis of why the mainstream media, the BBC included, sadly, has allowed itself to abandon the honourable traditions of scientific journalism.
Depressingly, a lack of money to do the job properly is not on the hit list, but the journalists themselves are. A picture is painted of aimless journos wandering around conferences being distracted by all the unpublished PhD poster work in the foyer, and not understanding a word of what they are told.
A great comment on the piece, repeating a remark from Ed Yong, sums it up:
“If you are not actually providing any analysis, if you’re not effectively ‘taking a side’, then you are just a messenger, a middleman, a megaphone with ears. If that’s your idea of journalism, then my RSS reader is a journalist.”
Thanks to Toby Marshan for drawing my attention to this blog.
Edit 2010-10-10: The BBC has just updated its online linking policy to repair some of the damage mentioned above, described in this Guardian blog post.