Three Dimensional Election Coverage

It shouldn’t be a surprise that my postings have dropped off recently, given the demands of my work schedule, but I wanted to write a special pre-election post to discuss a few trends in people writing about campaigns.

A recent article by Thomas Gilbert and Andrew Loveridge questions the growing trend of trying to predict the outcome of the upcoming election. While quantitative analysts like Nate Silver may make better predictions than other pundits, Gilbert and Loveridge argue that the very act of making predictions is the wrong goal for journalism:

“The real problem with our media wasn’t that it was bad at predicting elections (although it was)—it’s that it spends so much time on predicting elections at all, as opposed to moderating and shaping a national debate on what is at stake at the ballot box. Statisticians like Silver have helped eliminate bias when it comes to election prognostication, but there hasn’t been a similar commitment to eliminating the bias of spurious political narratives peddled by major media outlets. This leaves data journalism in the unfortunate position of helping to predict our electoral choices without evaluating their significance and pointing to alternatives.” (emphasis added)

This criticism of Silver is not new or surprising. Silver himself told New York Magazine “we’re not trying to do advocacy here. We’re trying to just do analysis. We’re not trying to sway public opinion on anything except trying to make them more numerate.” When Silver reintroduced FiveThirtyEight under ESPN, he provided the following template for mapping different forms of journalism:


You will notice that “advocacy” is not a part of Silver’s 2×2 table. It is out of bounds, completely separate from journalism. Gilbert and Loveridge argue that this is a problem with “big data.” While “big data” is an increasingly common boogeyman, it is not the quantitative approach of data journalism that causes a problem. Traditional punditry isn’t their answer. Neither is ethnography or some other rigorous qualitative approach to studying voter behavior. Instead, Gilbert and Loveridge are part of a long line of scholars who argue against the goals of most American political journalism:

“Nate Silver should not be lumped together with Bill O’Reilly or Glenn Beck as an enemy of civic engagement; he lives and operates in a social reality very close to our own. But he does have one thing in common with them: persuading people into perceiving politics through the aesthetic coherence of his models at the expense of their own political imaginations. This is the danger inherent in Big Data qua ideology, rather than a tool in the service of inquiry.”

I would argue that Nate Silver’s 2×2 table needs a third axis: analytic vs. advocacy. Silver, like many other data-intensive analysts, is dedicated first and foremost to trying to understand what people do and why they do it. Prediction sits at one extreme of the analytic spectrum, because it assumes prior behavior will help us understand future behavior to some degree. But data journalism and many forms of academic scholarship share another defining trait: they are non-judgmental. Nate Silver has political preferences, but he tries to keep them out of his analyses.

The advocacy axis has grown by leaps and bounds in an era of digital publication. There are plenty of websites you could go to that will outline the stakes of an election, who to vote for, and the dire consequences if you even think about voting the wrong way. Some of these sites provide valuable inquiry and allow for a broader range of political opinions than we would see in any form of journalism, whether it is data journalism or traditional punditry. Of course, there are a large number of problems with these sites, particularly if people rely on them as their main source of information.

Big data may be new, but picking on journalists for sitting too far along the “analytic/descriptive” axis instead of the “advocacy/take a stand” axis is not. Progressive academics have critiqued mainstream media as “empty” or “inadequate” without some form of advocacy since the Vietnam War, when critical scholars wanted news organizations to take an active anti-war stance instead of describing policymakers’ positions on the war. Gilbert and Loveridge’s ideal would seem to be a combination of advocacy, rigor, and either quantitative or mixed methods. It’s certainly an intriguing box, and it seems preferable to much other advocacy-based political writing, which is ad hoc and full of ad hominem attacks.

Unfortunately, the box seems impossible to fill. Trying to explain what people do and why they do it is the main goal for an analytic writer. Whether the writing is quantitative or qualitative, descriptive or causal, the goal is to understand what others are doing instead of judging those decisions. In most forms of advocacy writing, the goal is ultimately to sway people to do something else. These pieces can have a comparative advantage in introducing new ideas to a political debate, something that more analytic political writing is particularly bad at. More thoughtful forms of political advocacy that do not quickly devolve into tribal allegiances could be a tremendously valuable part of political participation. However, it doesn’t make sense to try to push data journalism (or social scientific big data) into this advocacy box, because the goals and strengths do not overlap.


About Noah Grand

PhD in Sociology. I use statistics to predict news coverage. And home runs.

2 responses to “Three Dimensional Election Coverage”

  • Tom

    Hi Noah,
    Thanks for providing such a rich and useful response to our piece. I’ve already posted my own response over in our article’s comment section, but am copying it below as I would like to hear your thoughts:

    “I wholeheartedly agree that the absence of an ‘analytic/advocacy’ axis is the heart of our critique of Silver’s approach to data journalism. But our critique is much deeper than simply suggesting Silver and others should be farther towards the advocacy end of the scale. Rather, it is because the underlying assumptions of the data itself are taken for granted that Silver ends up effectively supporting and propping up the horserace compulsions and ideological frames of our wider news media culture. Take, for example, the front-page Fox News story earlier today about the discrepancy between pre-election polls and the final results, and compare it with the 538 analysis. Note that both pieces, despite a slight difference in tone, actually support the same narrative frame for interpreting the election: 1) the biggest surprise of the night was the bias in polls in favor of Democrats; 2) the solution is to get polling firms and analysts to be more reflective about their assumptions of how to measure public opinion. The Fox News story even cites the 538 models, and goes on to quote Rasmussen’s seat-of-the-pants speculation that Republican candidates won the last-minute swing vote, as well as Larry Sabato’s belief that a government inquiry into polling practices wouldn’t work because ‘it would become partisan, inevitably.’ I hope this makes clear the strong continuity between Silver’s ‘journalism’ and the ability of media outlets like Fox to treat statistical ‘analysis’ as objective and informative rather than what it more often is: a cover for their own ideological agendas.

    I think Silver did a demonstrably better job of this during the 2008 and 2012 cycles, when his incisive critiques of Rasmussen and other subpar pollsters qualified as good journalism precisely because his analytics were *themselves* a form of advocacy. That is real journalism, and it’s what we want to see more of.”
    I don’t see this as an “academic” critique of data journalism; I think our suggestions are both realizable and desirable.

    • Noah

      Hi Tom,

      It’s great to continue the discussion. It looks like things are getting cross-posted on both of our sites…I guess I’ll cross-post?

      I think there are two separate issues here. The first is “what is the current state of the field?” I think a few of Alexander Howard’s comments on your article frame the issue very well:

      “The article suggests that ‘data journalism’ isn’t doing something. Data journalism is created by humans, or by the software humans create: perhaps your critique should be levied at specific authors, not the form itself. Much of the best data journalism of the past decade does make the world comprehensible.”

      “The tragedy for me, here, is that what Nate Silver did by adopting the term ‘data journalism’ for the kind of predictive work he produced poisoned the well for what data journalism was prior to his entrance in the space. By perpetuating that usage, you are not only validating that pre-emption but using that particular approach as a strawman to knock down the field as a whole, which is full of examples of precisely the thing that you want.”

      My post only compares your article, and my read of what you wanted out of “data journalism,” to Nate Silver’s proclaimed goals. If neither of us talked about other forms of data journalism, that is a pretty good indication that Nate Silver has co-opted the term quite effectively! I think we both agree that big data is a tool that could be used for many things. Silver is building a better mousetrap, but he is not using data to answer questions that cannot effectively be asked without data. It feels like we are all saying there are other questions and approaches we agree are more interesting. How polls handle likely voters in a midterm may be a more interesting question, because answering it produces new knowledge. Please let me know if I am reading your argument incorrectly.

      I also tried to push a separation between description and explanation on the one hand and advocacy on the other, drawing on your argument (and others’) that big data is an ideology. As you wrote, “Data journalism should be telling us what’s at stake, emphasizing our agency, and empowering our sense of political possibility.” This passage reminded me of another long-standing trend: progressive academics critiquing mainstream journalism as an ideology with de facto conservative biases. I don’t mean that the critique is “merely” academic, because it has important implications for the audience. If I am misreading your intent at some point, please let me know.

      I don’t see big data as the best way to answer questions like whether “sending troops to Liberia is the most effective course of action amongst unquestioned alternatives,” because any statistical model about Liberia would require assumptions so huge that the model would border on a joke. Good journalism should lead to better decisions, but this assumes people make decisions based on logic. I think there are limits to how many people want a technocratic answer to policy questions, based on big data, even if we assume those technocratic answers exist.
