Media Bias, Trump Voters, and Apples vs. Oranges

When I was in graduate school, I think the most difficult concept for any scholar to measure was media bias. How do we know one story is biased while another one is not? Bias is a clear and easy-to-use concept if we are talking about me shooting a bow and arrow at summer camp as a kid. There’s an objective target out there. If my shot keeps missing two inches to the left of where I aim, I can try to correct the physical errors in my shot by aiming two inches to the right of the bull’s-eye.

Objective journalists strive for the bull’s-eye every time too. However, there is no stable, fixed target for journalists to shoot at. Writing news stories involves a lot of judgment calls. Is this source credible? How do I portray them? Reasonable people can disagree about where the target should be. This means bias is an inherently relative concept when we are talking about the media. It’s based on our frame of reference. As I wrote years ago, audiences may want something different than news organizations.

Measuring media bias is awfully difficult because the target isn’t fixed. We have to choose a reasonable target – some standard of objective journalism – before we can measure how far off actual news content is. Choosing the target is almost inevitably a value-laden judgment. Even “objective” targets lead to value-laden interpretations of the results. For example, let’s say we want to compare news organizations to members of Congress. It takes a lot of work to go through the Congressional Record to build the target, and all that work may not be worth it. Congress isn’t a truly “objective” target because it has a majority party. A centrist newspaper would be to the left of Congress today, even if it is closer to the average ideology of Americans.

Since looking for “objective” targets to measure media bias can be a lot of work for little reward, that’s not how most scholars have tried to study journalistic bias. It’s much easier to look at two different media organizations and measure their content relative to each other. Another favorite strategy is to choose two similar news events, then study how the apple received different news coverage than the orange. These strategies are easy to do. They don’t require special math skills for the audience to follow the argument.

Unfortunately, the “compare apples and oranges” style of study distorts our view of the world. Imagine comparing how Vox Media and Breitbart covered the recent scandals facing the Trump administration. One media organization is on the left and the other is on the right, so obviously they will cover the scandal differently. That doesn’t mean either organization makes for a good bull’s-eye. Both media organizations built their brand on ideological takes; neither is trying to be in the center!

Scholars are human, so we will probably have more sympathy towards either the apple or the orange in an “apples and oranges” study of media bias. If you read one of these studies, you will notice that they look for cases where the differences are relatively large. If the apple and the orange receive similar news coverage, no one cares. Apples and oranges studies are good at describing differences in coverage in relative detail. However, these studies lack an anchor. The only way to say that the apple is biased while the orange isn’t is if we really, really like oranges. (I like trolling with the apples and oranges metaphor because I prefer kiwi!)

***

Whenever I read a pundit or an academic study comparing the attitudes of Trump voters to the attitudes of Clinton voters, I can’t help but see the same problem. Of course there will be some differences between Trump and Clinton voters. For example, a lot of people have focused on the large difference between Trump and Clinton voters on race. Trump announced his candidacy by disparaging Mexican immigrants as rapists and undesirables. It’s easy to see Trump, see a gap in racial attitudes between voters, and assume this is a story of increasingly racist Republican voters. On the other hand, Trump’s explicit racism and the Black Lives Matter movement may have combined to make White Democrats more progressive on race than they were four years ago.

When we separate everyone into two categories, all we can do is see the difference between those categories. We can see that race polarized voters. However, it’s hard to know just how much racism pushed voters into Trump’s camp, as opposed to anti-racism pushing other voters to Clinton, just by making a comparison.

The easiest way to write about statistical models actually makes this problem harder. To explain, let’s create a quick and dirty regression model from the American National Election Study. I used age, race (white/nonwhite), gender, and having a college degree as control variables. (The publicly available version of the ANES doesn’t have income yet, so take all the results here with a grain of salt. I’m not looking for an optimal model, just a teaching tool.)

Social scientists mainly measure racism via a scale of “symbolic racism.” This is a set of four questions that researchers started using in the 1990s, after it seemed clear that people would not truthfully answer more direct questions about whether Blacks should be discriminated against. The symbolic racism scale comes from a time when social scientists almost exclusively focused on Black-White differences. I will get to Trump’s explicit racism against Mexican-Americans in a separate post. Let’s start with the well-established symbolic racism scale. All these questions are on a 5-point agree-to-disagree scale:

  • Irish, Italians, Jewish and many other minorities overcame prejudice and worked their way up. Blacks should do the same without any special favors.
  • Generations of slavery and discrimination have created conditions that make it difficult for blacks to work their way out of the lower class.
  • Over the past few years, blacks have gotten less than they deserve.
  • It’s really a matter of some people not trying hard enough, if blacks would only try harder they could be just as well off as whites.

We put these items into a scale, since the average response to four related questions is probably more reliable than a single question. The scale still ranges from 1 to 5. I re-ordered responses so low values represent respondents who feel additional steps are necessary to end the systematic disadvantages that African Americans face, while high values represent racial resentment. To illustrate the strength of symbolic racism, I also added a respondent’s political ideology on a 1-7 scale (1 = strong liberal, 7 = strong conservative). I am using a logistic regression model, where positive coefficients mean a group was more likely to vote for Trump and negative coefficients mean they were more likely to vote for someone else (not necessarily Clinton). Non-voters are dropped from the sample.
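For readers who want to follow along, here is a minimal sketch of the scale construction and the model in Python with pandas and statsmodels. Every column and file name here (special_favors, voted_trump, and so on) is a hypothetical stand-in, not a real ANES variable name, and the recoding details would need to be checked against the codebook:

```python
import pandas as pd
import statsmodels.api as sm

# Load the 2016 ANES; the file name and all column names below are
# hypothetical stand-ins for the real ANES variable names.
anes = pd.read_csv("anes_2016.csv")

# The four symbolic racism items, each on a 1-5 agree/disagree scale.
items = ["special_favors", "generations_slavery", "deserve_less", "try_harder"]

# Flip items as needed so that high values always mean more racial
# resentment; which items need flipping depends on the raw coding.
for item in ["generations_slavery", "deserve_less"]:
    anes[item] = 6 - anes[item]

# Average the four items into one 1-5 scale; the mean of several related
# questions is usually more reliable than any single question.
anes["symbolic_racism"] = anes[items].mean(axis=1)

# Quick and dirty logit: 1 = voted for Trump, 0 = voted for someone else.
# Non-voters have already been dropped from the sample.
X = sm.add_constant(
    anes[["symbolic_racism", "ideology", "age", "white", "female", "college"]]
)
result = sm.Logit(anes["voted_trump"], X, missing="drop").fit()
print(result.summary())
```

The printed summary is the same kind of coefficient table shown in the screenshots below.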

[Screenshots: logistic regression of Trump vote on symbolic racism, political ideology, and demographic controls]

If we were interpreting this model quickly, we’d look and see that the coefficient for symbolic racism is relatively large and statistically significant at the .001 level. As a voter’s symbolic racism goes up, the likelihood they voted for Trump goes up by a fairly considerable margin. Since the symbolic racism scale only goes from 1 to 5, the difference isn’t as big as the difference between strong liberals and strong conservatives. Nonwhites were considerably less likely to vote for Trump, and we could run through the other control variables the same way.

At this point it’s very easy to point a finger at Trump voters and accuse them of being racists. The average Trump voter scored 3.92 on the 1-5 scale for symbolic racism. It’s an open and shut case, right? More racism causes more voting for Trump.

When I ran my regression model this quick and dirty way, I set a very unusual bull’s-eye for comparison. Strong liberals are the baseline for political ideology, because they have the lowest numerical value. However, only 3.6 percent of the respondents identified as strong liberals. Because symbolic racism is coded 1 to 5, the minimum value on the symbolic racism scale is the bull’s-eye for this regression model. It looks like symbolic racism causes an increase in voting for Trump because the value can only go up. Let’s see what happens if we reverse the symbolic racism scale to create an anti-racism index, then use it in the regression:
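Flipping the scale is a one-line change. A sketch, continuing with the same hypothetical column names as above:

```python
# Reverse the 1-5 scale into an anti-racism index: 5 now means the
# strongest rejection of the resentment items.
anes["anti_racism"] = 6 - anes["symbolic_racism"]

X2 = sm.add_constant(
    anes[["anti_racism", "ideology", "age", "white", "female", "college"]]
)
result2 = sm.Logit(anes["voted_trump"], X2, missing="drop").fit()

# Same model fit (same log-likelihood, same predictions), but the race
# coefficient flips sign and the constant shifts, because the zero point
# (the bull's-eye) has moved to the other end of the scale.
print(result2.summary())
```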

[Screenshot: the same logistic regression, with the anti-racism index in place of symbolic racism]

All the other values in this regression are identical, but the coefficient for the anti-racism index is negative. In other words, as people disagree with the statements in the symbolic racism scale, they are less likely to vote for Trump. The constant changed considerably too. You may have noticed the very low constant in the first regression model. Since logit coefficients are multiplicative in the odds, that low constant actually matters a great deal. However, it’s still very easy to overlook if we are skimming results. I often do!
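If you want to convince yourself that nothing substantive changed between the two models, you can compare them directly. A quick check, still using the hypothetical models fit above:

```python
import numpy as np

# Logit coefficients are additive in log-odds, which makes them
# multiplicative in the odds: a 1-point increase on a scale multiplies the
# odds of a Trump vote by exp(coefficient). The exponentiated constant is
# the odds when every predictor equals zero, so a tiny constant matters.
print(np.exp(result.params))   # odds ratios under the original coding
print(np.exp(result2.params))  # odds ratios under the reversed coding

# Every respondent gets the same predicted probability under either
# coding; only the baseline comparison point has moved.
assert np.allclose(result.predict(), result2.predict())
```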

The bull’s-eye in this model is weird. I’m comparing everyone to a hypothetical strong liberal with the maximum score on the symbolic racism index. That combination probably doesn’t exist in the real world, but that’s not the point. This model exists to show that we could just as easily make a regression model showing that opposition to racist assumptions made someone less likely to vote for Trump.

Regression coefficients tell us that there’s a large difference in vote preference between people who scored high and people who scored low on the symbolic racism index. However, regression coefficients can’t tell us which end of the scale is more meaningful. When we compare groups of people, regression models don’t tell us what the bull’s-eye should be. We build assumptions into the model, and these assumptions shade how we initially interpret results.

Thankfully, the American National Election Study has asked these symbolic racism questions on a fairly regular basis since 1992. We don’t need to make an assumption about whether symbolic racism surged upwards among Republicans in 2016. We can just check the data. In the interests of space, I will present changes in the index instead of going through each of the four questions. Note that this graph measures party identification, not who someone voted for, and it includes non-voters. “Independents” who lean towards a party tend to act like partisans, so they are included as partisans here.
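A check like this is just a grouped average over the cumulative file. A rough sketch, again with hypothetical file and column names:

```python
import pandas as pd

# Assume the 1992-2016 cumulative ANES file has a survey year, a 7-point
# party ID (1 = strong Democrat ... 7 = strong Republican), a white/nonwhite
# indicator, and the symbolic racism scale built the same way as above.
cdf = pd.read_csv("anes_cumulative.csv")  # hypothetical file name

def party_group(pid7):
    # Fold leaners (3 and 5) in with partisans; 4 is a pure independent.
    if pid7 <= 3:
        return "Democrat"
    if pid7 >= 5:
        return "Republican"
    return "Pure independent"

cdf["party"] = cdf["pid7"].apply(party_group)

# Mean symbolic racism by year, party, and race: the numbers behind a
# timeline like the one below.
trend = (
    cdf.groupby(["year", "party", "white"])["symbolic_racism"]
    .mean()
    .unstack(["party", "white"])
)
print(trend.round(2))
```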

[Figure: symbolic racism index by party identification and race, 1992-2016]

Every group reported somewhat lower symbolic racism in 2016 compared to 2012’s high water mark. This doesn’t really seem to fit with Trump, his base, or the regression results from earlier. One possibility is that racists are learning “politically correct” ways to answer these questions; this happened with more explicit questions about racism. However, there is little evidence that Trump’s base has tried to hide its views or give socially desirable answers in public opinion polls.

The biggest change in symbolic racism scores for 2016 is White Democrats’ large decrease, moving to nearly the same level as non-white Democrats. The symbolic racism score for White Democrats was relatively unchanged from 1992 to 2012. Trump’s campaign may have polarized voters more than it pulled them towards racism. Explicit racists got more of a mainstream voice in the Republican nominee. At the same time, whites who did not want to endorse racial resentment may have moved to more explicit anti-racist positions.

We see such a large difference in symbolic racism scores between Republicans and Democrats in 2016 largely because White Democrats’ scores dropped so dramatically. The regression model that says an increase in symbolic racism made it more likely someone would vote for Trump is technically correct. But that’s not the best way to interpret the data, since the average Republican actually had a slight decrease in their symbolic racism score too. People with the most racial resentment were probably voting for any Republican. The real story in the ANES appears to be a group of white voters moving away from racial resentment, and then voting against Trump because of his explicit racism.

The “secret sauce” of quantitative research is being able to make good arguments and inferences from the data we have. It’s not that hard to build a quick and dirty stats model just by relying on our assumptions of how variables “should” work. Imagine that model fits our assumptions perfectly, like the symbolic racism model of the Trump vote above. It’s awfully tempting not to dig deeper. Apples to oranges comparisons play into some of our worst cognitive biases, because it’s so easy to treat one group as the default and the other as the weird thing that must be explained. The tribalism and hyper-partisanship of today’s politics only makes this bias worse.

I spent so much time thinking about apples to oranges comparisons of media content when people were searching for evidence of “bias” that I’ve always been a little more skeptical of the logic behind these comparisons. It’s not a skill that I developed specifically to look at partisan politics. It’s not a skill that I developed from years of stats classes either. The ability not to rubber-stamp apples to oranges comparisons that fit our preconceptions is something different. In some ways it’s harder. It takes courage to acknowledge our preconceptions may be wrong.

In other ways, the ability to question our biases is easier to learn. We don’t need special classes in stats to develop this skill. Every year I taught stats, there were several students who walked into the class having already mastered it. When I was a teaching assistant, I think the class that actually focused the most on this skill was a freshman-level introduction to linguistic anthropology course. We were going over differences in how men and women communicate. We didn’t just catalogue the differences and debate nature vs. nurture; we spent more time asking whether there are certain stereotypical differences we expect to see, so those are the only differences we remember. Naive assumptions can become self-fulfilling prophecies unless we consciously try to check them. The good news is we can all learn to check them.