Early on in secondary school, one of my science teachers taught me something. Yep, you say, sounds about right, that’s what they’re there for. The problem was, it was wrong. I went home, talked about my day, nonchalantly brought up this fact, and was told it wasn’t right. I protested, gave what I had been told were the facts, but was again told no. While a tiny moment in time, it’s stuck with me as the first time I realised that the “people in power” (whoever they are) aren’t the all-seeing, all-knowing geniuses I had assumed them to be.
This brings me on to a subject that has been playing on my mind for a while, helped in no small part by my position between science and the media. The number of inaccuracies in journalism seems to be slowly increasing. Despite briefings with leaders in their fields, as well as any number of opportunities to check facts, data ends up being used incorrectly, misrepresenting stories instead of bolstering them. I may be talking rubbish of course – a sample of one is not a sample at all.
This week delivered a blinder of a factual error. The Times and YouGov developed and released a model-based estimate of this month’s election results. This was not a poll. Indeed, Philip Collins of The Times tweeted that it was definitely not a poll.
Then The Times splashed with “SHOCK POLL PREDICTS TORY LOSSES”, somehow misconstruing its own front page story.
Twitter justice swiftly followed, but the damage was done. The Times had shown that at no point in its editorial chain did it have a grasp of what it was actually reporting (or at least it failed to show that it did). Yes, they call it modelling in the text, but hey, let’s not worry about the headline. Yet it’s not just The Times that mucked it up. The FT still has the modelling listed on its election poll tracker – you can tell it’s different from the rest because its sample is 50,000, compared with the ~2,000-person samples of the actual polls.
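That sample-size tell is worth dwelling on, because the arithmetic is simple. For a conventional poll treated as a simple random sample, the expected margin of error shrinks only with the square root of the sample size – a back-of-envelope sketch (worst-case formula at 95% confidence; a seat-by-seat model built on 50,000 responses isn’t a simple random sample, which is exactly why it isn’t a poll):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case 95% margin of error for a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

print(f"n = 2,000:  +/- {margin_of_error(2000):.1%}")   # roughly +/- 2.2 points
print(f"n = 50,000: +/- {margin_of_error(50000):.1%}")  # roughly +/- 0.4 points
```

So a 25-fold bigger sample only buys you a 5-fold smaller error bar – one reason nobody runs ordinary polls that size, and a giveaway that the 50,000 figure belongs to something else entirely.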
You’re probably thinking that I need to get out more, go for a pint, watch Netflix, generally just chill. You’re probably right. But this stuff is important. It is impossible to know what’s going on in the world if you can’t understand the data. And it’s also impossible to convince people of how science can improve the world if you can’t communicate it. Responsibility does fall in part to us inbetweeners, yes, but there are many others in the chain – writers, sub-editors, editors – who all have a responsibility to be informed enough to report effectively on the world around them. I can’t help but wonder whether the ultimate goal of today’s crop of media is straight and true: is it to educate, entertain and inform, or are other influences at play?
To be fair to the media, they aren’t the only ones being swayed by outside pressures. I had an email discussion with my Dad this week about the impact of cognitive bias (yes, I do need to get out more), as he’d been invited to a lecture about how it is affecting his field of work. Cognitive bias, in this context, means a researcher skewing the results of their own study (through data selection, for example) towards what they want them to be, subconsciously or otherwise. The lecture called it “The elephant in the room of science and professionalism”, saying that sound and objective science is essential to the continued progress of society. Others are aware of this problem; liberal science demi-god Ben Goldacre has started a number of projects through the University of Oxford’s Centre for Evidence-Based Medicine, such as COMPare, which aims to stop scientists misreporting clinical trial results by quietly switching which outcomes they report partway through a trial.
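The harm in that kind of mid-trial switching is easy to demonstrate for yourself: measure enough outcomes on pure noise and one of them will look “significant” by chance alone. A purely illustrative simulation (hypothetical trial, test statistics drawn as standard normals under a true null, the usual p < 0.05 cut-off):

```python
import random

random.seed(42)

def trial_significant(n_outcomes, z_cutoff=1.96):
    """Simulate one null trial that measures several independent outcomes.

    Each outcome's test statistic is standard normal (no real effect);
    'significant' means |z| > 1.96, i.e. p < 0.05 two-sided. Returns True
    if *any* outcome clears the bar -- the one a motivated analyst could
    switch to after the fact.
    """
    return any(abs(random.gauss(0, 1)) > z_cutoff for _ in range(n_outcomes))

trials = 10_000
rates = {}
for k in (1, 10):
    rates[k] = sum(trial_significant(k) for _ in range(trials)) / trials
    print(f"{k:>2} outcome(s): false-positive rate ~ {rates[k]:.1%}")
# With 1 pre-specified outcome the rate sits near the nominal 5%;
# with 10 outcomes to pick from it climbs towards 1 - 0.95**10, about 40%.
```

Nothing in the simulation is dishonest at the level of any single test; the distortion comes entirely from choosing what to report after looking.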
That this kind of thing goes on in the professional world staggers me. There might still be part of that naïve kid inside me, but I look at my colleagues around me and see people working tirelessly to do the best job they can. Is everyone else trying as hard?
Fancy a chat? Contact firstname.lastname@example.org