True Facts, Maybe
Matt Waite thinks epistemology (and a little fake software) could save journalism—here’s why
There is little so beloved among journalists as The Truth. If there were anything approaching religion in the newsroom, The Truth would be the deity. It feels good to be a part of The Truth. It gives your work a greater purpose. A nobility of sorts.
But is this what we really seek, The Truth? Are we faithful congregants? I see a lot of data journalism that makes me wonder. Do we really understand Truth? Is that even the goal of what we do? And are we doing enough, as data journalists/app developers/data visualizers to arrive at The Truth? To facilitate it? As a sinner, I wonder.
So, I’ve been wallowing in the Liberal Arts side of my head lately. I’ve been wandering the philosophy section on the internet, reading about Truth, Knowledge, and epistemology, trying like hell to get out of the very practical, mission-focused side of me and really think about why I’m uncomfortable with a lot of data journalism lately. And I think I have it.
It’s doubt.
Too often, we use numbers with blind faith, as if they were handed down from on high and we mere mortals are not to question them. We do not communicate doubt in our own findings, or anyone else’s for that matter. Most of us recognize that data is dirty, flawed, and noisy. We admit that often the data we have has a point of view—it was gathered for a specific purpose, and that purpose isn’t always our purpose. Lately, an uncomfortable amount of the data we write about has been poorly gathered by a marketing company hoping we’ll take the bait. Also, we know that human beings often gathered this data, which means there are errors in it. But how often do we communicate that to readers?
Rarely.
Even if we do try to communicate doubt to readers, I wonder if we possess the right language to express it. My head was sent spinning reading Jordan Ellenberg’s How Not To Be Wrong this summer. The book is filled with all kinds of mathematical thinking, but one of the most powerful parts of it is where he explains just what statistical significance really is. Put plainly, it just means a result is unlikely enough that we’re not supposed to chalk it up to chance alone. If you’ve studied statistics, I think you probably know that. But, if you’re like me, that didn’t completely do away with the loaded meaning of “significance.” You, like me, thought statistical significance meant it isn’t random and that it Means Something, which may not be the case at all. It might be statistically significant and utterly unimportant.
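If it helps to see it, here’s a minimal sketch of a two-proportion z-test in TypeScript, with invented numbers: a difference of two-tenths of a percentage point clears the conventional p < 0.05 bar simply because each group has a million people in it.

```typescript
// Illustrative only: a tiny effect in a huge sample is "significant" but not important.
// The counts below are invented, not from any real survey.

function twoProportionZ(hitsA: number, nA: number, hitsB: number, nB: number): number {
  const pA = hitsA / nA;
  const pB = hitsB / nB;
  const pooled = (hitsA + hitsB) / (nA + nB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / nA + 1 / nB));
  return (pA - pB) / se;
}

// Group A: 50.2 percent of 1,000,000. Group B: 50.0 percent of 1,000,000.
const z = twoProportionZ(502_000, 1_000_000, 500_000, 1_000_000);

console.log(z.toFixed(2)); // ~2.83, past the 1.96 cutoff for p < 0.05 (two-tailed)
// Statistically significant. Practically? Two-tenths of a percentage point.
```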
How many of our readers know that? How many of our readers know to even question that?
Most often what passes for data journalism these days comes in the form of strident, confident, and declarative statements with Findings You Won’t Believe. Or we blithely claim we can explain some massively complex situation in X number of charts, where X is almost always less than the number of fingers on one hand. And we leave absolutely nothing to question. Our need for Authority demands it.
I think the effects of this are becoming clear. And they aren’t good.
A recent warning sign, if you need one, came from Margaret Sullivan, the New York Times’ Public Editor, who asks “Why Didn’t the Data-Driven Media See Eric Cantor’s Defeat Coming?” The question implies that because there’s data in the world, data journalists should have seen what is a colossally rare thing coming. A sitting party majority leader losing to an economics professor who raised a tiny fraction of the incumbent’s cash and skipped campaign events because of finals week at his university—I mean, who couldn’t predict that?
I believe questions like this come from the mistaken notion that data means there’s no doubt about what comes from it. Numbers = Truth. And it’s our collective tendency toward the declarative that fosters this idea.
Recently, Zeynep Tufekci got at some of my angst in criticizing 538’s blown prediction of Brazil beating Germany in the World Cup. 538 had Brazil winning it all, and even without their most talented player and their team captain, they predicted a win over Germany in the semifinals. Final result: Germany 7, Brazil 1.
“All measurements are partial, incorrect reflections,” Tufekci wrote in a post you should read. “We are always in Plato’s Cave. Everything is a partial shadow. There is no perfect data, big or otherwise. All researchers should repeat this to themselves, and even more importantly, to the general public to avoid giving the impression that some kinds of data have special magic sauce that make them error-free. Nope.”
And the Truth Shall Set You Free
This jeremiad started with my worry about our struggles with recognizing and communicating the difference between capital-T Truth and truth, lowercased.
The canon of philosophical thinking about truth, to put it mildly, is long. And unless you’re a really big fan of that kind of writing, it’s taxing. I’ve found math textbooks that were more readable. You’d think something so fundamental as truth would be a simple thing, but you’d be very, very wrong. To save you the time: there are a ton of theories of truth, and among the first is the correspondence theory of truth. Simply put, the correspondence theory says that what we say or think is true so long as it corresponds with the facts. Simple, right?
Wrong.
Let me ask you a question: The U.S. Census Bureau says that my town of Lincoln, Neb., has 265,389 people in it. Does it?
Answer? Yes. And no.
Let’s peel back some layers here. The U.S. Census is mandated by the Constitution, which requires the government to count every man, woman, and child in the country every 10 years. Courts have held that the Constitution literally requires a headcount, meaning no sampling or guessing or imputation or anything.
So, the whole thing is doomed from the start. Homeless people are hard to find, undocumented immigrants don’t exactly want to chat with government agents, and the same goes for tin-foil hat conspiracy theorists/sovereign citizen/anti-government folks. But the government tries anyway. And they publish a number. That number is the number of people they counted on April 1 of the first year of that decade (2010 was the last one). Now, that number is published long after April 1—the first numbers usually come out about 8 months after and then more trickle out over time.
The problem? One day after April 1, the data, which is wrong to begin with, is wrong-er. How? People die. People are born. People move. People go to jail. Life goes on. The data does not.
Between decennial censuses, the Census Bureau uses sampling methods to estimate the population year after year. Those numbers have a sampling error, which makes wrong numbers that are frozen in time…something.
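Here’s what that sampling error looks like if you actually show it to readers. A minimal sketch, with hypothetical numbers (not a real estimate for Lincoln); the only real-world detail baked in is that the Census Bureau publishes its American Community Survey margins of error at a 90 percent confidence level.

```typescript
// Turning an estimate and its margin of error into something honest to print.
// The estimate and margin below are hypothetical.

const estimate = 268_000;  // hypothetical point estimate
const moe90 = 1_900;       // hypothetical margin of error at 90 percent confidence

const standardError = moe90 / 1.645;  // back out the standard error
const moe95 = 1.96 * standardError;   // widen it to a 95 percent interval if you like

const low = Math.round(estimate - moe95);
const high = Math.round(estimate + moe95);

console.log(`About ${estimate.toLocaleString()} people, give or take;`);
console.log(`call it somewhere between ${low.toLocaleString()} and ${high.toLocaleString()}.`);
```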
But does Lincoln have 265,389 people? Yeah, around there. It’s about right. It’s true, not True. It’s as correct as we have any hope of finding out. It’s true without being scientifically, according to Hoyle, 100 percent accurate.
And that’s the difference between Capital T Truth and lowercase t truth. Capital T Truth, by some philosophical constructs, exists only in the realm of religion or science, if at all. Things like “God is good,” “light moves at 299,792,458 meters per second,” etc., are Truth to those who believe. They are the incontrovertible basis of a life’s work: a foundation, a core. All else is truth.
Space Cameras and Epistemic Justification
Where my mind was opened to this was in remote sensing—data pictures taken from space. And it was a professor at the University of South Florida, Barnali Dixon, who let me into her remote sensing class and blew my mind with this Truth vs. truth idea.
Think about it this way: Walk outside. Find a tree. Walk up to it. Describe it to someone who has no idea what a tree is. In describing it, you’re likely to reach out and touch it, right? Because you can touch it. You can look closely at it. You can interact with it. You know things about it. It is real to you. With sensors, you can measure things about it, and you can do so easily because you are standing next to it.
Now, take a picture of that tree from space and do the same thing. Given a powerful enough sensor, you will be able to see it. You’ll be able to describe it in meaningful ways. You can say things about the leaves, or the size of the tree. But, there are thousands of feet of atmosphere between the tree and the sensor. There are changes in the air density and makeup, in particulate matter in the air, in cloud cover. All of those things—and more—can alter the data recorded by the sensor and analyzed by you. Greens aren’t as bright, infrared radiation values get skewed, what you know about the tree changes, if only slightly. You will perceive the tree differently than you will standing next to it. Those descriptions may be true in the light of the data you have, but they are not True in the way being able to touch the tree makes them True. They will not match the sensor values you got standing next to it. It’s the consequence of the remote part of remote sensing.
In other words, remotely sensed data will tell us things about the world that are broadly true, but they will never be True. They can’t be. There’s too much distance between the object and the sensor. Too much to go wrong. But that doesn’t make what the sensor tells us wrong. It makes it something else. It introduces doubt.
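If you want to see how small that doubt can be and still matter, consider a vegetation index like NDVI, which is just arithmetic on two bands: (NIR - Red) / (NIR + Red). A minimal sketch, with invented reflectance values and an invented atmospheric shift:

```typescript
// NDVI = (NIR - Red) / (NIR + Red), a standard vegetation index.
// The reflectances and the atmospheric offsets below are invented for illustration.

function ndvi(nir: number, red: number): number {
  return (nir - red) / (nir + red);
}

// Standing next to the tree: ground-measured reflectance.
const groundNDVI = ndvi(0.50, 0.08);                  // ~0.72

// From orbit: haze brightens the red band a bit and dims the near-infrared a bit.
const satelliteNDVI = ndvi(0.50 - 0.04, 0.08 + 0.03); // ~0.61

console.log(groundNDVI.toFixed(2), satelliteNDVI.toFixed(2));
// Same tree, different numbers. If a classification threshold sits between them,
// the label flips -- and that is exactly the doubt worth disclosing.
```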
And that brings us to epistemic justification. Epistemology, broadly speaking, is the branch of philosophy that is concerned with knowledge, and how we know what we know. Epistemology, like Truth, is another area of philosophy that’s dauntingly large and complex. But the classic account can be boiled down like this: knowledge is justified true belief. We believe something, it’s actually true, and we have something to justify that belief, like facts or observations.
Data journalism spends a lot of time on the first part of epistemic justification—believing something is true and writing about it as if it were—while giving incomplete attention to the second part: the justification for that belief.
Before I get too far into philosophical debates, let’s talk about journalism again.
For a story, I spent 10 months analyzing Landsat imagery of the state of Florida, looking for wetlands that had been replaced by urbanized uses. Using a method that combined painstaking visual analysis with some data filtering using other imagery layers, the then-St. Petersburg Times (now Tampa Bay Times) determined, conservatively, that 84,000 acres of wetlands had been lost to urbanization, which is remarkable given that federal and state regulations say that’s not supposed to happen to such an extent.
As part of this story—which was the first of four award-winning investigative stories produced over two years—we published our complete methodology. In it, we laid out the exact results of our assessments of the work I did—warts and all. To wit:
The accuracy of the Times analysis was then formally assessed, using a random set of 385 points—enough for a 95 percent confidence level with a plus or minus 5 percent confidence interval where the distribution of the data is unknown—generated in ArcGIS through Hawth’s Analysis Tools. The points were visually inspected to ensure spatial randomness, and further tested using spatial density analysis. Overall accuracy of the analysis was around 80 percent for both image years. However, wetlands accuracy for both years was around 66 percent, and errors were largely of commission—calling something a wetland that wasn’t. Errors were primarily misclassifications of certain kinds of agriculture—i.e. sugar—and misclassifications of wetlands forest types that share similar spectral characteristics of non-wetlands forest types. Relying on only the Times analysis would be a mistake, based on several factors. Using single date imagery for wetlands change detection has been found many times over to be less accurate than using multi-date imagery, owing partly to seasonal changes in wetlands (Lunetta and Balogh (1999), Ozesmi and Bauer (2001), Reese et. al (2002)). And, the two image years use different sensors—Landsat 5 TM for one, and Landsat 7 ETM+ for another—which creates some sensor-based differences that can’t be extracted or accounted for.
How often do you see a news organization saying that relying on their own analysis alone would be a mistake? Frankly, nine years later, I now wonder if the editors even read this part of the story.
But look at what we did: We formally assessed the accuracy of our work, we found it lacking, and we devised a method to overcome it. Then, we detailed that method, and formally assessed the outcome. And we published it all. In the end, our analysis met the standards for peer-reviewed remote sensing work.
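For anyone who hasn’t seen a formal accuracy assessment before, the mechanics are simple enough to sketch: check a random sample of points against reference data, build an error matrix, then read off overall accuracy and the commission errors described in the methodology above. The matrix below is invented; the only number that mirrors the story is the 385-point sample size, which is the standard worst-case calculation for 95 percent confidence and a plus-or-minus 5 percent margin.

```typescript
// A sketch of a remote-sensing-style accuracy assessment. The error matrix is
// invented; only the sample-size arithmetic matches the published methodology.

// Worst-case sample size for 95 percent confidence, +/- 5 percent (assume p = 0.5):
const samplePoints = Math.ceil((1.96 ** 2 * 0.5 * 0.5) / 0.05 ** 2); // 385

// Error matrix: what we classified vs. what was really on the ground.
const classifiedWetland = { wetland: 66, notWetland: 34 };  // hypothetical counts
const classifiedOther   = { wetland: 10, notWetland: 275 };

const total =
  classifiedWetland.wetland + classifiedWetland.notWetland +
  classifiedOther.wetland + classifiedOther.notWetland;

// Overall accuracy: everything we got right, over everything we checked.
const overallAccuracy = (classifiedWetland.wetland + classifiedOther.notWetland) / total;

// Commission error for wetlands: we called it a wetland and it wasn't one.
const commissionError =
  classifiedWetland.notWetland / (classifiedWetland.wetland + classifiedWetland.notWetland);

console.log(samplePoints, overallAccuracy.toFixed(2), commissionError.toFixed(2));
// 385, 0.89, 0.34 -- numbers like these are what belong next to the story.
```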
Should every data story do this? No. Not hardly. There’s no time. And not every story is worth assessing this way. But a broader, more systemic step toward this kind of transparency is warranted.
But how?
Let’s Make Some Fake Software
Let’s get back to practical for a moment. What if we were to make an epistemic software layer for news? Like Vox’s cards, except for doubt. Let’s even call it Doubt.js. And because I’m incapable of being this serious for this long, let’s make it tag phrases based on data with animated gifs, because that’s what the internet demands. What would it do?
First and foremost, I think it needs to signal to readers when there’s doubt about something. It may be fine to include a fact, or some data, or someone’s conclusions about something, even if there’s doubt. It may be all we have. Or maybe the story is just for fun—Top Colleges for This Major!—and we don’t really care about the methodology because we’re not taking it seriously. I don’t have a problem with data stories for fun; we just need to let people in on the joke. Here’s this data set, it’s not serious, but what the hell, let’s take a look anyway.
So, our system would have a place where we could annotate a data source with the doubts we have about it. And there would be levels of doubt. For fun and the sake of argument, let’s have rock solid, pretty good, danger zone, and lolz this is crap. Add some data-driven fact to the story, add your doubt to our backend, tag the factoid with one of our levels, and Doubt.js will add a little link or icon inline that shows a pop-up card explaining why we tagged it that way. (There’s a rough sketch of what that might look like after this list.) And on those pop-up cards you’d see something like this:
Rock Solid. Core, commonly used, and thoroughly vetted datasets that may have warts, but are generally the authoritative source on these things. Talking: weather data, jobless claims, certain Census data, economic indicators, etc.
Pretty good. Data that get used a lot, but there are some holes that make them a little troublesome. Specifically thinking about unemployment numbers here and stories about how they’re up and down and what that means. Even economists debate whether these numbers really measure what they claim. But, at the same time, it’s what we’ve got and there aren’t better alternatives.
Danger zone. Primarily, this is survey data, especially where the sample size is really small for what they want to measure or the source won’t disclose things like sample size or margins of error. Surveys are incredibly useful, but should be treated with caution, especially if they’re coming from a commercial source. Like, say, this piece, which could have used a whole lot of doubt smeared on it.
Lolz this is crap. This is reserved for those top 13 (or whatever) lists, stories that we pretend have some kind of data backing them up. For example, this train wreck, which only gets worse the more you read the methodology. I have no problem with people writing these things—you’re a liar if you say you’ve never read a top 10 list post somewhere. But I have a problem when publishers take them seriously or pretend they have any validity at all, because readers might just do the same. I’m sure the folks at Rowan University are great people, but to say they are the 63rd best journalism school in the US, ahead of the University of Missouri at 64th, is laughable. And if some kid took Mizzou off her top schools list because of this crap, that would be a crime.
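And since I promised fake software, here’s about as far as I’m willing to take Doubt.js: a little TypeScript sketch of what an annotation and its pop-up card could look like. Every name, type, and function here is invented.

```typescript
// Doubt.js, sketched. Fake software for a fake-software proposal; everything here is invented.

type DoubtLevel = "rock-solid" | "pretty-good" | "danger-zone" | "lolz-this-is-crap";

interface DoubtAnnotation {
  claim: string;      // the data-driven sentence in the story
  source: string;     // where the number came from
  level: DoubtLevel;  // one of the four levels above
  becauseWhy: string; // the human-written explanation shown on the pop-up card
  gif?: string;       // because that's what the internet demands
}

// Build the text a reader would see after clicking the inline icon.
function renderCard(note: DoubtAnnotation): string {
  const icons: Record<DoubtLevel, string> = {
    "rock-solid": "🪨",
    "pretty-good": "👍",
    "danger-zone": "⚠️",
    "lolz-this-is-crap": "💩",
  };
  return `${icons[note.level]} ${note.level.toUpperCase()}\n${note.becauseWhy}\nSource: ${note.source}`;
}

console.log(renderCard({
  claim: "Lincoln, Neb., has 265,389 people.",
  source: "U.S. Census Bureau, 2010 decennial count",
  level: "pretty-good",
  becauseWhy: "A legally mandated headcount that misses people and froze in time on April 1, 2010. Close, but not gospel.",
}));
```

The design choice that matters isn’t any of this plumbing; it’s that the doubt is written by the journalist who knows the data’s warts, and it lives right next to the claim instead of buried in a methodology box nobody reads.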
Is this the right way to go about this? I have no idea. I’m throwing it out there as a means to start thinking more seriously about how we communicate what we know and how we know it. Just because something is a number or can be made into a neat map or cool chart doesn’t make it Truth. We paint ourselves into a corner when we say that Data = Authority, and the more (and longer) we do it, the smaller that corner gets.
Credits
Matt Waite
Matt Waite is a professor of practice in the College of Journalism and Mass Communications at the University of Nebraska-Lincoln and founder of the Drone Journalism Lab. Since he joined the faculty in 2011, he and his students have used drones to report news in six countries on three continents. From 2007 to 2011, he was a programmer/journalist for the St. Petersburg Times, where he developed the Pulitzer Prize-winning website PolitiFact.