Category Archives: Diffusion

Facebook Dives Into the Echo Chamber

Earlier today, several in-house researchers at Facebook published a study in Science regarding how much users engage with links that cut against their ideological beliefs. There are already a lot of thoughtful posts on this article, since there’s a lot to chew on. The basic finding isn’t too surprising: people are less likely to “engage” with links that do not correspond to their stated political beliefs. The authors argue there is a three-step process:

  1. We only see links posted by friends and other pages we follow, and they are not a random group. People tend to congregate on Facebook based on their political ideology.
  2. Facebook’s algorithm does not place all possible stories on our “News Feed” when we log in. It favors posts shared, liked and commented on by friends. The authors do not fully disclose how the algorithm works, but they do find it cuts down on how much people see stories that cross ideological boundaries: 5% of cross-cutting stories were screened out for self-identified liberals, and 8% for self-identified conservatives.
  3. Facebook users don’t click on every link. As I’ll discuss later, Facebook users ignore the vast majority of links to political stories. After controlling for things like the position of the link (people are much more likely to click on the first link when they log in), liberals were 6% less likely to click on a link mainly shared by conservatives, and conservatives were 17% less likely to click on a link mainly shared by liberals. (A toy sketch of how these stages compound follows this list.)
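To see how the stages compound, here is a minimal sketch in Python. The two reduction rates per group are the figures reported in the study; everything else is a simplification I’m introducing, including normalizing the network’s supply of cross-cutting stories to 1 – and remember that the biggest filter of all, who your friends are, happens before this calculation even starts.

```python
# Toy funnel: how much cross-cutting content survives the algorithm (step 2)
# and individual clicks (step 3), relative to what the network supplies (step 1).
# Reduction rates are the study's reported figures; the normalization is mine.

def surviving_share(algo_reduction, click_reduction):
    exposed = 1.0 * (1 - algo_reduction)        # after News Feed ranking
    clicked = exposed * (1 - click_reduction)   # after individual choice
    return exposed, clicked

for group, algo, click in [("liberals", 0.05, 0.06),
                           ("conservatives", 0.08, 0.17)]:
    exposed, clicked = surviving_share(algo, click)
    print(f"{group}: {exposed:.0%} survives the algorithm, "
          f"{clicked:.0%} survives algorithm plus clicks")
```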

This process makes sense for an individual story, but it’s a troubling model for studying months’ worth of Facebook user behavior. As Christian Sandvig points out, Facebook’s algorithm is based on what users engage with. In other words, if I tend to click on all of the fantasy baseball links I see in May, I will be more likely to see fantasy baseball links that people share in June. I’ll probably see some fantasy football links too, even though I want no part of fantasy football.
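As a rough illustration of why this feedback loop muddies the analysis, here is a minimal sketch of an engagement-weighted ranker. The weights, boost value, and update rule are all invented for illustration; Facebook’s real ranking model is proprietary.

```python
# Clicks in one period raise a topic's weight, which reorders the next
# period's feed - so "user choice" and "algorithm" are entangled.
from collections import defaultdict

topic_weight = defaultdict(lambda: 1.0)

def record_click(topic, boost=0.5):
    topic_weight[topic] += boost  # May's clicks raise the topic's weight...

def rank_feed(stories):
    # ...and the weight reorders June's feed
    return sorted(stories, key=lambda s: topic_weight[s["topic"]],
                  reverse=True)

for _ in range(5):
    record_click("fantasy baseball")

june_feed = [{"title": "Cross-cutting politics link", "topic": "politics"},
             {"title": "Waiver-wire pickups", "topic": "fantasy baseball"}]
print([s["title"] for s in rank_feed(june_feed)])
# ['Waiver-wire pickups', 'Cross-cutting politics link']
```

If the algorithm is trained on past clicks, then step 2’s “algorithmic” filtering already embeds step 3’s “individual choices” from earlier months.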

Separating step 3 from step 2 is problematic, but it appears to be the authors’ main goal in interpreting their results: “We conclusively establish that on average in the context of Facebook, individual choices more than algorithms limit exposure to attitude-challenging content.” To continue with the sports reference, this is where sociologists start throwing penalty flags. The interpretation found in the scholarly journal just happens to be the same argument that Andy Mitchell, the director of news and media partnerships for Facebook, gave when facing criticism last month. (See Jay Rosen’s criticism here.) As I argued weeks ago, Facebook isn’t in a position to get the benefit of the doubt. We’ll get back to the problems of how to interpret the article’s findings in a minute. First, it is important to understand how different the group the authors claim to study is from the group they actually study.

As Eszter Hargittai and other sociologists have pointed out, 91 percent of Facebook users were excluded from this study because they did not explicitly disclose their political ideology on their Facebook biography. Users were excluded for providing ideologies that weren’t explicitly liberal or conservative – a user who said their politics “are none of your damn business” would be dropped. It is unclear how self-identified “independents” were treated in this study (none of the posts I have seen mentioned this). My political scientist friends would like me to point out that self-identified independents are often treated as “moderates” when they are actually covert partisans. Users who did not log on at least four times a week were dropped as well.

Once Hargittai added all the exclusions, just under three percent of Facebook users were included in the study. As she argues, the 3% figure is far more important than the 10 million observations:

“Can publications and researchers please stop being mesmerized by large numbers and go back to taking the fundamentals of social science seriously? In related news, I recently published a paper asking “Is Bigger Always Better? Potential Biases of Big Data Derived from Social Network Sites” that I recommend to folks working through and with big data in the social sciences.”

The 3% of Facebook users who are included in the study are probably different from the 97% who are not. At this point, it is helpful to consider the two groups separately.

What Happens for People in the Sample?

One of the hardest things for a scholar to do is publish findings that aren’t surprising. We already know that people tend to have social networks with disproportionately like-minded people. The biggest effect that the Facebook researchers found is homophily. We don’t see a random selection of stories when we log on to Facebook because our friends aren’t a random group of humans. We see stories from people we are friends with – assuming we haven’t muted those friends because of their postings – and from pages we follow. Most media scholars have found some degree of self-selection, and found it to be most prominent online. Neither the study’s authors nor its critics want to emphasize this point, but the results seem pretty clear in the graphic below (reproduced from the article):

[Figure reproduced from the article: the share of cross-cutting stories remaining at each stage – potential from network, exposed, and selected.]

Critics focus on the role of the algorithm (the “exposed” line in this graphic) versus the role of users choosing to ignore stories. When I first read the term, I thought “users’ choice” included the choice of friends (the big drop for “potential from network” in the graphic). Apparently it only refers to whether users choose to click on a story or not (the last line of the graphic). It does not refer to whether users choose to unfriend or block someone because of their political beliefs. Maybe I’m thinking of this differently because I recently talked with someone who chose to unfriend everyone who didn’t share her political views. If we include adding and dropping friends in the big ledger of “user decisions” and Facebook’s friend-suggestion algorithm in the big ledger of “algorithmic influence,” it is much easier to see why the authors would argue user behavior is so important, but I may be giving them more credit than they deserve.

The “News Feed” algorithm picks favorites, and we don’t fully know how, which is very troubling. On the other hand, it is only picking from the narrow subset of stories our friends have posted, and that may be a very narrow ideological range. As I wrote weeks ago, Facebook clearly has its thumb on the scale by not showing everything on a user’s “News Feed” when they log in. Facebook’s in-house researchers acknowledge some degree of algorithmic screening of stories shared mainly by the other side: the effect is 5% for liberals and 8% for conservatives. So yes, Facebook’s thumb is on the scale. However, most of the weight comes from who we are friends with.

Click-throughs as the End Measure? Really?

The emphasis on people clicking links was surprising to me, because click-throughs are relatively rare. This study only included people who provide their political ideology on their Facebook pages. These users are likely to be more engaged with politics, so we would expect them to be more likely to click on political links than other users. Yet the overall click-through rate reported in this article was only 6.53 percent. As many scholars and writers are finding, social media “engagement” often has a very low correlation with reading the link. In many cases, the low correlation is driven by posts that get a lot of likes and comments even though people don’t read the story being linked to.

Imagine someone linked to a story about Hillary Clinton’s speech advocating more pathways to citizenship for undocumented immigrants. Furthermore, imagine the person sharing the story is a conservative, arguing against Clinton’s “pro-amnesty” position. Other conservatives may rally around the Facebook post, seeing it as an opportunity to voice their complaints about Clinton rather than as new information to be consumed for making more informed decisions in the democratic process. This behavior happens on both sides of the aisle. Progressives may post a link about Rand Paul avoiding a campaign stop in Baltimore for the same reason.

What About People Excluded from the Study?

97% of Facebook users were excluded from the study. Some of these users will be just as partisan and ideological as the people who were included; they just declined to put their ideology on their bio page. Other users may be less ideological or less interested in politics. Because most people interested in Facebook’s effect on the news are interested in political news, it is easy to overlook the fact that a lot of people who write online may not be all that interested in politics. (In my dissertation, I found bloggers showed a strong preference for publishing non-political phrases over political phrases during the 2008 election, though there are critical methodological differences between repeating phrases and showing holistic interest.)

If people do not post political stories and ignore most political links on Facebook, would we expect them to learn anything about politics when they log on? I’m not sure if any research has been published specifically on this question yet, but studies of television “infotainment” suggest the answer is yes. Matthew Baum and Angela Jamison found that people who avoided the news but regularly watched shows like Oprah and David Letterman were better informed about politics than people who avoided both the news and those shows. (Full disclosure: I worked as an RA for years on a project with Tim Groeling and Matt Baum.) Watching the news or reading the newspaper provides more information than “soft news,” but soft news can be surprisingly effective in communicating the broad strokes of current events.

Skimming Facebook may also give people the broad strokes of current events. People who have read their Facebook wall in the last two weeks may know there was a riot or uprising in Baltimore, even if they do not regularly watch the news or click on links to news stories. The difference on Facebook is that exposure to political information is largely contingent on who your friends are, and your friends are more likely than not to congregate on the same side of the political spectrum. Thus, some people may have heard about the Baltimore riot while others heard about the Baltimore uprising.

Ironically, it is Mitchell, the Facebook executive, who offers the best advice on how to treat Facebook as a potential news source:

“We have to create a great experience for people on Facebook and give them the content they’re interested in. And like I said earlier, Facebook should be a complimentary news source.”

The problem is that skimming Facebook could make it easy for people to feel informed without actually being informed.


#Ferguson on Twitter and Facebook

Following the events unfolding in Ferguson last night was surreal for a number of reasons. First and foremost, we do not expect to see police forces in the United States using tear gas to break up a peaceful protest or arresting journalists. We don’t expect to see American police killing a member of the community and then failing to provide an adequate explanation. (Unfortunately, Michael Brown is not the only person killed by police this week.) We normally think of security forces killing innocent people and repressing any member of the community who demands an explanation as something that happens in the Middle East, not in the middle of the United States.

Outrage filled my Twitter feed last night, in a way that it had not over the past weekend. I follow a mixture of political journalists, academics and sports writers. It’s a clear sign that an event has crossed over into the broader consciousness of people who follow current events when all three of these groups are tweeting about the same thing. All eyes on my Twitter feed were on #Ferguson last night, and mine were too. However, many of us who followed the story via Twitter were shocked when we went on Facebook. No one else seemed to be posting about Ferguson! I can only speak to what I saw on my News Feed, but a lot of people had the same impression. We know that the number of tweets tagged with #Ferguson spiked last night.

https://twitter.com/PatrickRuffini/status/499754709377642496/photo/1

However, we don’t know what Facebook users were doing as an aggregate group. Facebook keeps that information proprietary. All we know is that we did not personally see a large volume of Ferguson posts on our personal Facebook “News Feed,” even though we saw a large volume of tweets about Ferguson.

Why is it worrying that Ferguson didn’t spike on Facebook?

When I started writing this post, I was solely thinking of the question of why news about #Ferguson and last night’s protests would spread differently on Twitter as opposed to Facebook. However, I want to highlight an article Zeynep Tufekci posted on Medium earlier today. Tufekci reminds us that our ability to discuss the events in Ferguson and have some clue about what’s going on there depends on considerable freedoms, including net neutrality. Many of us think of threats to net neutrality as either direct censorship (as seen in other parts of the world) or the creation of an Internet “fast lane” (which would end up discriminating against a wide swath of ideas and innovative startups that cannot buy their way into the fast lane). Tufekci argues the use of various algorithms to determine content is another layer separating users from a truly neutral internet experience:

Algorithmic filtering, as a layer, controls what you see on the Internet. Net neutrality (or lack thereof) will be yet another layer determining this. This will come on top of existing inequalities in attention, coverage and control.

Tufekci relies on a comparison between Twitter and Facebook to explain the concept of “algorithmic filtering” and why it could be particularly threatening in this case. Twitter does not filter individual posts; it provides them all, in chronological order. However, Twitter’s list of what’s “trending” rewards spikes, which would penalize #Ferguson as a trending hashtag. Tufekci points out how absurd this was given what people were tweeting about, but Twitter’s algorithms are not as concerning as Facebook’s:

“No Ferguson on Facebook last night. I scrolled. Refreshed.

This morning, though, my Facebook feed is also very heavily dominated by discussion of Ferguson. Many of those posts seem to have been written last night, but I didn’t see them then. Overnight, “edgerank” –or whatever Facebook’s filtering algorithm is called now — seems to have bubbled them up, probably as people engaged them more.

But I wonder: what if Ferguson had started to bubble, but there was no Twitter to catch on nationally? Would it ever make it through the algorithmic filtering on Facebook? Maybe, but with no transparency to the decisions, I cannot be sure.

Would Ferguson be buried in algorithmic censorship?”
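Tufekci’s aside about trending is worth making concrete. Here is a toy sketch of spike-based trending; the window, threshold, and hourly counts are all hypothetical, since Twitter’s actual algorithm is not public. It shows how a sudden burst can register as “trending” while a hashtag that builds steadily – even to a much larger volume – may not.

```python
# A hashtag "trends" when recent volume far exceeds its own baseline,
# so steady growth can lose out to a sudden burst.
def is_trending(hourly_counts, window=3, threshold=3.0):
    recent = sum(hourly_counts[-window:]) / window
    baseline = sum(hourly_counts[:-window]) / max(len(hourly_counts) - window, 1)
    return recent > threshold * baseline

sudden = [10, 12, 11, 10, 400, 600, 900]         # sharp spike
sustained = [200, 300, 400, 500, 600, 700, 800]  # steady build, higher volume
print(is_trending(sudden))     # True
print(is_trending(sustained))  # False
```

At least a trending list like this is a transparency problem we can reason about; Facebook’s News Feed, as the quote above notes, offers no such visibility.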

Tufekci’s argument could be rejected by looking at our “News Feeds,” but it cannot be definitively confirmed by this method. If we see a large volume of posts on Ferguson, we can probably conclude that Facebook was not censoring them in some way. If we see a low volume of posts on Ferguson at a particular point in time, we cannot definitively say that some form of Facebook censorship is at play. We would be foolish to dismiss the possibility of Facebook censorship, given that Facebook routinely hides around 80 percent of what people post from our “News Feeds.”

Alternatively, if Facebook just wants to attract users and keep them engaged by liking or commenting on posts, we would expect it to over-emphasize posts on a highly emotional issue like Ferguson. Emotional posts attract more participation. Remember how Facebook experimented on our News Feeds to test this theory? Regardless of whether Facebook specifically altered the frequency at which posts on Ferguson appeared in users’ News Feeds, it seems foolish to rely on Facebook for sharing information about breaking news. Friends don’t let friends get all their news from an organization that hides 80 percent of posts. We should encourage people to rely on as few algorithms as possible when searching for news, particularly when none of us can fully explain how those algorithms work.

So why didn’t #Ferguson show up much on my Facebook feed last night?

As much as we should be concerned about Facebook – even if the company has no specific bad intentions here – Facebook’s algorithm may not be the best explanation for why we didn’t see many Facebook posts about Ferguson last night. I cross-posted most of my messages so they would appear on both Twitter and Facebook. I have relatively low overlap between my contacts on the two platforms. I know a lot of my friends who I interact with face-to-face on a regular basis use Facebook but do not use Twitter. As we might expect, friends on Twitter (or Twitter and Facebook) retweeted me before friends on Facebook hit the like or comment button.

On the other hand, most of my cross-posted messages got one or two likes/comments on Facebook last night. This gives me some anecdotal evidence to reject the theory that Facebook was de-emphasizing #Ferguson. Instead, I noticed a different trend. People who interacted with my Facebook posts only did so briefly yesterday, but they were more active this morning. A number of other people who were silent about #Ferguson last night were posting links on Facebook today. While we can’t know for sure how much of a role Facebook’s algorithm played, there is a much simpler explanation for why #Ferguson spiked on Twitter before it showed up much on Facebook.

Twitter is the platform of choice for following breaking news, and the people who primarily use Facebook may also be opting out of following the news closely.

Opting out of following the news closely is easier with today’s media fragmentation, but it is hardly a new concept. Almost 60 years ago, Katz and Lazarsfeld found that most people heard about current events through friends, family and co-workers instead of directly consuming the news. People who watched the news closely and then shared it with others were dubbed “opinion leaders.” For most people, the way they learned about news went something like this:

[Diagram: the classic two-step flow of communication – mass media reach opinion leaders, who pass the news along to everyone else.]

My current research focuses on who might count as an “opinion leader” in a digital age. There are two primary traits that separate today’s opinion leadership from what Katz and Lazarsfeld found 60 years ago:

  1. Opinion leadership is often done through computer-mediated communication instead of face-to-face personal communication. If someone has vaguely heard a few things about Ferguson or “the black kid who got shot,” then looks at your social media posts and follows the story a bit, congratulations! You are acting like an opinion leader. Most of us know certain people who tend to act as opinion leaders, particularly on a social media service like Facebook where many users do not focus on current events.
  2. Opinion leadership can be a full-time position, independent of original news reporting. While sites like The Huffington Post, Buzzfeed, and leading liberal and conservative blogs do a little original reporting, they can rely on copying other reporters for most of their content.

Once you realize that many people do not want to make the commitment to keep up with breaking news, it is no surprise that Facebook would be a bad place to search for people posting about #Ferguson. The people who want to follow these events closely are all on Twitter. In many ways, the more interesting question is when people do decide to copy political ideas and phrases, whom do they decide to copy? Are they copying the New York Times? The Huffington Post? Other bloggers? At the risk of shameless self-promotion, I am presenting two different works in progress at the American Sociological Association’s annual meetings.

  • Tomorrow (Friday) I am presenting at the Media Sociology Pre-Conference, discussing some of the largely self-created barriers that traditional media elites face when trying to spread political ideas, and why bloggers may prefer copying professional opinion leaders instead.
  • On Monday, I will be presenting at the Computational Social Science and Studying Social Behavior session during the ASA meetings, with what looks to be an outstanding panel. This talk will focus on how phrases specifically tied to social identity (such as the prominence of race running throughout any discussion of #Ferguson) diffuse differently than phrases focused on other aspects of electoral politics or the military.

Google’s Happy Newsroom?

On Wednesday, NPR ran a story on Google’s experimental newsroom, which explicitly avoided publishing anything negative about Brazil’s soccer team in the wake of their shocking 7-1 loss to Germany on Tuesday. Brazilians increasingly searched for “shame” alongside some reference to their country or team. However, Google refused to publish these trends in Brazil. Google copywriter Tessa Hewson is quoted explaining that the trend was rejected for being too negative: “We might try and wait until we can do a slightly more upbeat trend.”

This reaction stunned NPR’s Aarti Shahani, and it would probably stun many veteran reporters. Why avoid negative headlines?

I ask the team why they wouldn’t use a negative headline. Many headlines are negative.

“We’re also quite keen not to rub salt into the wounds,” producer Sam Clohesy says, “and a negative story about Brazil won’t necessarily get a lot of traction in social.”

Mobile marketing expert Rakesh Agrawal, CEO of reDesign mobile, says that’s just generally true. “People on social networks like Twitter and Facebook — they generally tend to share happy thoughts. If my son had an A in math today, I’m going to share that. But if my son got an F in math, that’s generally not something you’re going to see on social media.”

Notice how Agrawal is giving an example of a personal story – his son’s grades – instead of a sports game played by people we do not know personally. This difference between sharing personal stories about our lives and external stories about the world cuts at the question I posed in my last post: what do people find newsworthy? How do we balance personal and external news?

People who gather information for a living often focus on the negative, at least in many areas of social life like politics. The harmonious balance of a well-run community rarely has specific events that make journalists think “wow, that’s a great story!” But when people share stories on social media sites, they tend to emphasize the positive among what they are exposed to. This sharing-versus-journalism model implies that people who produce media content for a living have fundamentally different views on what is newsworthy. Shahani’s article on NPR highlights these differences, and suggests some of why they are so shocking to a professional reporter. However, it appears as though people working in social media may conflate the type of communication (sharing vs. traditional news) with the thing being talked about (interpersonal stories vs. broader social interest).

Is Google’s emphasis on the positive unique to sports and entertainment?

It is interesting to compare the Google Newsroom’s treatment of Brazil’s loss for a Brazilian audience to ESPN or Yahoo!, which cover sports for a national audience in the United States. The World Cup is headed to the final. The NBA season concluded weeks ago. Twenty years ago, the end of the NBA season would have been the best time for a basketball writer to take a vacation. This year, many basketball reporters are realizing the offseason is when they see a dramatic increase in readership, making the NBA one of the two main sports for ESPN to cover right now. Bryan Curtis explained the change at length in a feature earlier this week for ESPN subsidiary Grantland. He starts by interviewing Tim Bontemps, the New York Post reporter who broke the news that Jason Kidd was leaving the Nets.

It was more important than any story Bontemps had written all year. That’s because the trade rumor — shorthand here for any offseason transaction news — has become the dominant form of NBA journalism. “For everybody in my line of work,” Bontemps said, “the offseason has really become bigger than the regular season.”

[snip]

“I can’t comprehend how big this has been,” Bontemps said. “When I got it, I thought it was going to be a big story. I had no idea. I didn’t expect it would be the lead story in sports for three days. It’s been stunning.”

Other NBA reporters tell similar stories. “It’s not just the offseason,” said ESPN’s Marc Stein. “It’s transactions, period. People love transactions.” In an era when reporters can count the number of times their posts get shared on Facebook or retweeted on Twitter, it is very easy to compare the audience’s interest in different topics.

This year, the offseason began on May 18 — the day Roy Hibbert scored 19 points in Game 1 of the Eastern Conference finals. That’s when a “rival executive” told Yahoo’s Adrian Wojnarowski that Minnesota was making noises about trading Kevin Love. Woj — who has mastered the Trade Rumor Era better than anybody — then posted two tweets, holding out Boston and Houston as possible destinations for Love. They were retweeted nearly 2,000 times combined. By comparison, Woj’s column about Hibbert’s Game 1 performance was retweeted 72 times.

While Google’s experimental newsroom focused on positives and negatives for the World Cup match, NBA fans have a different focus based on an orientation to time. Stories about the present or recent past receive relatively little attention. After the Spurs’ offense set records during game 3 in Miami, there was surprisingly little interest in articles explaining how their dynamic offense works. The audience perked up after the finals, when reporters started tweeting even the most speculative information about where LeBron James will play basketball next season.

“The Finals are about the Heat and the Spurs, LeBron James and Tim Duncan,” said Henry Abbott, ESPN.com’s NBA editor. “But LeBron James’s free agency is about everybody’s imagination. Now your team may get LeBron. You can project your dreams onto it.”

Comparing Google’s treatment of soccer to the audience’s interest in the NBA rumor mill illustrates a broader point about sharing behavior and newsworthiness. With many topics outside of our personal experience, envisioning the future may be more important than the present. The possibility of LeBron James leaving Miami for Cleveland may be a positive for Cleveland fans and a negative for Miami fans, but for most people it doesn’t clearly line up with concepts of valence. As a sports fan who is interested in organizations and news media, I find his decision interesting. It makes me think about all the possibilities for the future. But I don’t really care which team he chooses to play for on any personal or emotional level. [Ed: LeBron announced he is returning to Cleveland as I put the finishing touches on this post this morning.]

Separating positivity and negativity from the interpersonal

Instead of saying “people prefer to share positive stories,” it is possible that “people prefer sharing stories involving personal connections, and these stories tend to be positive.” We can think of many reasons why sharing interpersonal, negative content would be rare. Sharing negative feelings or comments about friends on a social media site where our entire clique can see is rude and could divide the group. People generally don’t like to read “whining” posts, an informal norm that is often communicated as a part of new users’ socialization. Users may keep problems to a smaller group of friends, versus a relatively wide broadcast to our entire social network.

When people post about politics and a broad range of social issues, we would not necessarily expect the same normative prohibitions to apply. For instance, as I was starting this blog post last night, the athletic equipment company Warrior posted the following on its Twitter feed (both posts deleted this morning):

Screengrab of Warrior’s initial tweet

Is this an apology?

Screengrab of Warrior’s “clarification”

https://twitter.com/OldHossRadbourn/status/487426242195779584

 

Comparing Warrior’s tweet to the angry replies illustrates when sharing negative emotions is not only acceptable but expected. Warrior shared negative feelings about ESPN, many segments of the sports audience, and women’s participation in sports more broadly. Many people who saw this tweet felt offended. Having covered softball’s College World Series, I’d certainly put myself in that group! We see Title IX as part of a historical trajectory in the United States, encouraging greater female participation in sports (and possibly other areas of social life). Warrior portrayed a different social trajectory. Critics like me pointed our ire at Warrior, a specific offender, more than at the sport of lacrosse. That offender is not a part of our social group. Pointing out this company’s offensive tweet and subsequent refusal to immediately apologize is a fundamentally different form of sharing negative emotions.

Instead of simply pairing “sharing” activity with positive feelings and “reporting” activity with negative feelings, we need to separate what types of activity people treat as newsworthy on social media. Sharing negative emotions may be more acceptable or even expected when it comes to politics and other social trajectories that may be threatened. It is worth thinking about whether a preference for sharing a certain emotional valence or a certain set of topics comes first, or whether they can even be separated.

  • People who favor sharing “positive” stories may gravitate away from formal news stories and towards sharing more interpersonal stories. Cat videos and baby pictures can be cute and heartwarming. Politics may provoke anger. Local news may be full of crime and chaos.
  • People who prefer sharing “negative” stories are easiest to theorize as part of a Durkheimian sense of maintaining moral order by highlighting transgressors. For example, people who never tweeted about being happy to see softball on ESPN may still repeat Warrior’s tweet along with their condemnation. If people want to maintain moral order by creating social boundaries and labeling deviants, they would probably look to discuss broader political and social arenas instead of policing their own clique of friends.
  • People who favor sharing interpersonal stories will probably gravitate towards sharing positive content. As I discussed earlier, there are many barriers against sharing negative interpersonal content.
  • People who favor sharing political stories and social issues are the hardest to predict. Media organizations and large blogs that cover these topics often skew towards outrage. However, people may want to share more positive messages as a way to build solidarity or encourage some sort of affirmative behavior like donating to a political cause. People may be most inclined to share something about politics or social issues when it includes some kind of social trajectory. These trajectories may be uplifting stories of hope (think of Upworthy), negative stories of “deviants” trying to quash a preferred social history (Warrior and Title IX), or less emotional stories of social change that capture our imagination (LeBron to Cleveland).

Google’s Newsroom appears to be making a counter-intuitive prediction: news and negativity don’t go together all that closely. One possibility is that Google Newsroom is focused on “sharing” as a holistic behavior and not separating by topic. They could be making a huge mistake. The preferences that lead many people to share personal stories on social media may be very different than the preferences that lead people to share stories about social trajectories. Both preferences may be different than Google searches, particularly if people search about different topics than they post about.


Overstating and Understating the Influence of Facebook

What does it take to get me to fire this blog up again? People going on Twitter to share links to blog posts discussing the ethical ramifications of Facebook’s “emotional manipulation” study. If you have read my prior postings, you may notice that one of my blogging interests is examining the process of persuasion.

Because persuasion and the media’s influence on public opinion are rare topics in sociology, I hope I can use my interdisciplinary background to help explain how the concerns over Facebook’s experiment are simultaneously overblown and understated. Just to warn you now, this is a long post.

Facebook’s potential power and influence come from the use of a “News Feed,” an algorithm designed to filter out at least some postings while highlighting others. The news feed does not delete my posts, but it may hide at least some of my posts from my friends’ pages, without their knowledge. Facebook’s experiment is alarming not because of the results (which were minimal). Facebook raised alarm bells because it showed the corporation is willing to alter its news feed algorithm in order to try and manipulate its users in some way. This raises an important question, which others have not discussed:

Why do we put up with the news feed algorithm in the first place?

If we think of Facebook as a “news” or “media” company, then the existence of a news feed is easier to understand. Every news organization uses some sort of filtering criteria to decide which events will be treated as news and which events are not “newsworthy.” These decisions are inevitable. Newspapers and television broadcasts have limited space for publication. The Internet removes most space limits because server space is cheaper, but there is still limited labor. With limited labor, we can accept that news organizations have to pick and choose which events to feature and which ones not to.

If we are interested in the news, we can have at least some idea of which news organizations will tend to feature or ignore which topics. For example, anyone here in Los Angeles knows local TV stations will drop anything for a car chase. The Los Angeles Times will split its attention between local, state and national issues but won’t emphasize car chases. Websites specialize, so their selection criteria are easy for readers to understand. We may not fully understand how news organizations make decisions – even those of us who study these decisions for a living – but we understand decisions must be made.

Facebook does not have to make the same filtering decisions as media companies that produce their own content. Facebook has the server space to store all of our posts somewhere (along with a scary amount of user information). We do the labor of producing content. Facebook could share everything on our news feeds without using an algorithm. So why not protest Facebook’s News Feed algorithm and demand that everyone be shown every post, in strict chronological order? Facebook used to offer this option, but when I taught social networks last year my students had no idea it ever existed.

Changing to a new, non-filtering news feed would probably disorient users more than any of Facebook’s other changes. As news consumers, we are all used to having someone else do some of the tough work of sorting through a wide range of potential stories to pick out the highlights. With no algorithm, we would have to do this filtering every time we use Facebook. Most of us probably do some of this filtering in our heads anyway, but we would have to do much more work without a News Feed algorithm, sorting out the “newsworthy” posts from things we don’t care as much about.
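For the sake of concreteness, here is a minimal sketch of the two feed models being compared: strict reverse chronology (no algorithm) versus an engagement-weighted ranking. The scoring rule – sort by likes – is a deliberately crude stand-in for whatever Facebook actually does.

```python
# Two ways to order the same posts: newest-first (the reader filters)
# versus engagement-ranked (the algorithm filters for you).
from datetime import datetime

posts = [
    {"text": "Baby photo",      "time": datetime(2014, 7, 1, 9),   "likes": 40},
    {"text": "Local news link", "time": datetime(2014, 7, 1, 12),  "likes": 2},
    {"text": "Wedding album",   "time": datetime(2014, 6, 30, 20), "likes": 95},
]

chronological = sorted(posts, key=lambda p: p["time"], reverse=True)
ranked = sorted(posts, key=lambda p: p["likes"], reverse=True)

print([p["text"] for p in chronological])
# ['Local news link', 'Baby photo', 'Wedding album']
print([p["text"] for p in ranked])
# ['Wedding album', 'Baby photo', 'Local news link']
```

Notice that the local news link leads the chronological feed but sinks to the bottom of the ranked one: the same inventory, very different “newsworthiness.”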

Why is “newsworthiness” different on Facebook?

On Scatterplot, Dan Hirschman highlighted the oddity of Facebook calling its algorithm a “News” Feed. What is “newsworthy” is a notoriously difficult question, even for journalists. In my own research, I find journalists often rely on how a news event is planned in advance as a sign of its importance, instead of judging events based solely on what gets said at the event or the logistical difficulty of covering the event. The question of what is “newsworthy” is even more ambiguous on social media, because different people can use the same site for different purposes. Some people may use Facebook in a similar way to journalists, sharing information about the wider social world. Other people may use Facebook to share pictures of their babies. Baby pictures may not be interesting to a large number of people, but for many of us sharing our personal stories can be the most important “news” of the day.

If Facebook is going to filter which postings appear on our news feed, it has to balance a wide range of competing interests among users. More than any of us, Facebook’s employees are probably keenly aware of which users employ which standards of newsworthiness. (I assume my profile is sitting on a server somewhere.) As Jenny Davis wisely points out, some of these standards are essentially hard-wired into the Facebook interface. The presence of a like button and the relative difficulty of expressing certain emotions like sadness bias Facebook content in favor of positive emotions. However, many aspects of Facebook’s News Feed algorithm are proprietary and constantly shifting. Roughly a year ago, Facebook made major changes to how it treated posts from media companies, as compared to individuals’ posts (particularly user-generated meme graphics). Sites like Upworthy surged as they took advantage of Facebook’s new algorithm.

By this point, the fundamental ambiguity of “newsworthiness” plus the proprietary nature of Facebook’s News Feed algorithm leads us to four basic points:

  1. People probably prefer having a News Feed algorithm that filters posts to seeing everything in strict chronological order.
  2. There is no universally “correct” way to program a News Feed algorithm, because people want different things.
  3. Any News Feed algorithm creates winners and losers – a power Facebook wants to keep for itself rather than hand over to users.
  4. We know less about how Facebook screens out posts than we do with nearly any other media organization.

Are alarm bells going off in your brain? They should!

Facebook could have considerable power by manipulating the News Feed in various ways. What’s worse, the things we know about traditional journalism and how news spreads suggest that most people are more than happy to rely on others to sort through potential news stories and pick out the highlights. Most of the concerns about Facebook have been shared via blogs and Twitter. Now think of your friends who are active or semi-active on Facebook but don’t use Twitter. Are they concerned about Facebook? Mine aren’t. So what can Facebook do with this power, which many users are more than happy to tacitly consent to? Here’s Zeynep Tufekci:

To me, this resignation to online corporate power is a troubling attitude because these large corporations (and governments and political campaigns) now have new tools and stealth methods to quietly model our personality, our vulnerabilities, identify our networks, and effectively nudge and shape our ideas, desires and dreams. These tools are new, this power is new and evolving. It’s exactly the time to speak up!

That is one of the biggest shifts in power between people and big institutions, perhaps the biggest one yet of 21st century. This shift, in my view, is just as important as the fact that we, the people, can now speak to one another directly and horizontally.

… [Snip 1 paragraph]

I’ve been writing and thinking about this a lot. I identify this model of control as a Gramscian model of social control: one in which we are effectively micro-nudged into “desired behavior” as a means of societal control. Seduction, rather than fear and coercion are the currency, and as such, they are a lot more effective.

Go read the whole article at Medium. No one has a monopoly on relevant knowledge to bring to this discussion. My initial goal in writing this post was to address some of the assumptions of Tufekci’s Gramscian model, along with Hirschman’s questions at Scatterplot about how news and newsworthiness operate. After spending years working with people in political communication, it is clear how little sociological theory has progressed on the issue of persuasion and public opinion compared to other social sciences. Tufekci’s Gramscian model proposes that individuals are extremely susceptible to large organizations with access to large amounts of user data and the digital media tools to utilize that data. As Tufekci argues:

Today, more and more, not only can corporations target you directly, they can model you directly and stealthily. They can figure out answers to questions they have never posed to you, and answers that you do not have any idea they have. Modeling means having answers without making it known you are asking, or having the target know that you know. This is a great information asymmetry, and combined with the behavioral applied science used increasingly by industry, political campaigns and corporations, and the ability to easily conduct random experiments (the A/B test of the said Facebook paper), it is clear that the powerful have increasingly more ways to engineer the public, and this is true for Facebook, this is true for presidential campaigns, this is true for other large actors: big corporations and governments.

How can people resist a large actor with this much power?

Unfortunately, this is where Tufekci’s argument (and many other arguments about the power of Facebook) overstates its claims. The implication of her argument, like Gramsci and a host of other critical theories in communication, is that people have relatively few resources to resist the subtle and corrupting influence of corporate power. However, Tufekci and a wide range of other critics are not simply accepting Facebook’s power. We are not blank slates. We all interpret new political information based on our prior held beliefs. Poll people after a debate, and partisans on both sides will tend to say their candidate won the debate. Ask people to look at a news story, and people with strong opinions on any side will tend to feel that the story is biased against them. Ask people what they think of Facebook’s study, and their response will probably be based on their opinions about corporations, the use of power and data privacy.

The largely negative reactions to Facebook’s study show how difficult it is to change people’s minds, particularly when they have already formed their opinions on a topic. Consider one of the more common arguments downplaying the outrage over the Facebook study, from Tal Yarkoni: “So the meaningful question is not whether people are trying to manipulate your experience and behavior, but whether they’re trying to manipulate you in a way that aligns with or contradicts your own best interests.” Yarkoni goes on to argue that tech companies need regular experimentation to improve their user experiences, and these could be real benefits. I can see both sides of the argument. Having some version of a News Feed algorithm probably does align with most users’ best interests, insofar as users want to avoid the cognitive effort of sorting through feeds to find the postings they care about most. On the other hand, years of reporting experience have taught me that just about everyone who produces information for public consumption is trying to manipulate you in some way. That’s what makes us angry and unwilling to trust!

Even when the manipulation of information is subtle, people are very sensitive to new information that does not fit their prior held beliefs. We aren’t cultural dopes. The number of websites where someone can post reactions and critiques has grown exponentially. Facebook sharing plays an increasingly important role in spreading news stories and blog posts, but it does not have a monopoly on social media sharing or traditional media content (at the moment). As a result, there are ways for critiques of Facebook to spread to a large segment of the audience. Any direct or overt censorship of particular causes is very risky, because audiences have the ability to find the censorship and tell others about it, as with this blog post and the others I have linked to.

“Facebook Manipulates Users’ Emotions” is a great headline that prompts people to think of a lot of nightmare scenarios. However, the emphasis on stealthy, subtle emotional manipulation makes it hard for people to understand the most powerful and plausible effect of Facebook’s News Feed algorithm: the ability to influence which topics we think are worthy of debate.

What is “agenda setting?”

Convincing people to think a particular way about a particular issue can be extremely difficult, but there are other things Facebook could do to influence public opinion. Several people have already discussed a different Facebook study, where giving people information about where to vote and showing which friends had already logged in to say they voted led to an additional 340,000 votes being cast in 2010. Selectively posting these messages in particular parts of the United States could influence elections, but it would probably cause irreparable harm to Facebook’s reputation with users.

The easiest way to influence people is to convince them that a particular issue is important. When people see a particular topic repeatedly in the news, they are more likely to think that it is important. This is known as “agenda setting.” Setting the agenda is a limited influence. Getting you to think about something isn’t as powerful as getting you to agree with me on a particular policy. However, campaigns know agenda setting is important, and they try to define the agenda by emphasizing particular issues. Since they often rely on news organizations for publicity, they often fight with news organizations for control of the agenda. As people increasingly share political ideas online, it is unclear whether people respond to the agenda they see on social media in the same way as traditional media. However, Facebook’s News Feed could have tremendous power to set the agenda, as I describe with the following three hypothetical initiatives:

  1. Facebook brings the world closer together. Imagine Facebook wanted to try a World Cup-themed branding exercise by saying it wanted to make it easier for users to follow what is going on across the world. This could be presented as a public service as well (albeit a public service many users wouldn’t want). If everyone’s news feed is altered to emphasize various kinds of international stories, then foreign policy could become a more salient issue. Americans currently prefer Republican positions on foreign policy. Therefore, an increase in international coverage in people’s news feeds could hurt Democrats in the fall. Alternatively, an increase in international coverage could highlight the humanitarian crisis in Central America and the sudden influx of undocumented children migrating north. Increased attention would make migration a more salient issue for voters. Given Facebook’s demographics, users are more likely to favor Democrats’ proposals for migration reform and be frustrated with Republicans’ lack of movement on the issue.
  2. Facebook connects people to non-partisan guides to candidates’ recent accomplishments. Assume Facebook wanted to make sure every user was better informed by connecting them to a highly reputable non-partisan guide listing what candidates have done in prior elected office. This could be great PR for Facebook, particularly if the company wants to alleviate concerns that it has the power to swing elections. But this service would also swing elections. Besides reminding a disproportionately Democratic voting base about the election, a guide to candidates’ recent accomplishments could suggest that the goal of government is to do things. The guide may remind people about the Republican party’s reputation as the party of no, strengthening some users’ dislike of the Republicans. (This is “priming” more than “agenda setting.”)
  3. Facebook cares how you feel about the issues. Since Facebook knows users dislike non-emotional posts, it could skew the news feed to emphasize people adding emotional reactions to stories they share, as opposed to direct, non-emotional storytelling. This would change the news feed to increase the weight of friends sharing stories, particularly if they add comments that include a specified set of emotional terms. A mechanical counting algorithm wouldn’t get in people’s heads as they write on Facebook, but it’s good enough (see the sketch after this list). This algorithm would increase the salience of issues that tend to provoke emotional responses, such as race, gender and religion. Other issues would be less likely to appear in our news feeds.
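To make the third hypothetical concrete, here is a minimal sketch of such a mechanical counting algorithm. The term list, weights, and scoring rule are all invented for illustration; nothing here is Facebook’s actual code.

```python
# Score a shared story by counting emotional terms in the sharer's comment.
EMOTION_TERMS = {"outraged", "heartbroken", "furious", "thrilled",
                 "disgusted", "proud", "terrified"}

def emotional_boost(comment, base_score=1.0, per_term=0.5):
    words = {w.strip(".,!?").lower() for w in comment.split()}
    return base_score + per_term * len(words & EMOTION_TERMS)

print(emotional_boost("Furious about this. Disgusted!"))    # 2.0
print(emotional_boost("Here is the city council agenda."))  # 1.0
```

A scorer this crude never has to get inside anyone’s head; it simply rewards whoever writes in emotional vocabulary, which is exactly the point of the hypothetical.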

In many ways, the third hypothetical may be close to what Facebook has been doing for the past year. Facebook’s algorithms favor sites like Upworthy, which emphasize emotional levers in click-bait headlines. They also give a subset of users what they appear to want: a place to share emotional reactions about political events and receive validation from friends. Even without Facebook, some of my ongoing research suggests bloggers may prefer copying phrases from new media sites that surround the phrase with emotional reactions to traditional media elites that couch phrases in the rituals of objectivity.

In 2008, John McCain’s campaign manager Rick Davis said “this campaign is not about the issues.” Washington insiders were stunned by the “gaffe.” Thanks to Facebook’s News Feed and what many users prefer as political news, Davis may have been a prophet, anticipating a world where social media helps users set the agenda for elections based on their emotional responses to identity politics.


Reaching out with Rainbows

 

Los Angeles mayor Eric Garcetti shared the following image on his public Facebook page today:

[Photo shared by Garcetti: a rainbow over Los Angeles.]

Beneath the image, Garcetti added a message to his constituents: “Rain is forecast for Los Angeles this week. In order to conserve water, we encourage Angelenos to turn off their outdoor sprinklers for the remainder of the week and for a few days after the rain. Please SHARE this news with your friends and neighbors.”

As of this posting, Garcetti’s message has twice as many Facebook shares as likes. That’s a rare pattern on Facebook, where most posts get more likes than shares. It looks like people aren’t just appreciating the photo or showing their thanks for rain amid a serious drought. People are also sharing the underlying civic message.

Of course, whether or not people follow through on this message remains to be seen. Many Angelenos live in corporate-managed apartment buildings, so we don’t have the power to turn off the sprinklers ourselves. But we can share the message, and it is spreading more widely than most of Garcetti’s posts. There seems to be a lesson here for how elected leaders can use social media to promote civic responsibility.

 


Upworthy and Going Viral

Over the last week, both Ezra Klein of the Washington Post and Robinson Meyer of the Atlantic commented on the explosive growth of Upworthy over the last few months. Most people in cultural industries work under the maxim that “all hits are flukes,” and it is almost impossible to predict viral hits in advance. However, Upworthy outlined a strategy to try and ensure virality in a recent blog post, while also reporting that 87 million unique visitors came to the site in November. (They don’t offer any comparisons, but Meyer makes the safe assumption that it’s “a lot more” visitors than the New York Times.)

Whenever people discuss Upworthy’s content, they jump to the site’s distinctive headlines. To sample from today’s headlines:

“First These Women Were Offended. Then They Realized Who Was Being Offensive.”

“Watch: Celebrities Wasting Money To Make The World a Better Place”

“Here’s What One Woman Thinks Is The Worst Thing We Do To Girls. Many Women Will Agree With Her.”

Critics like Meyer point to the headlines as clickbait, striking an ideal emotional tone to maximize readers’ responses. More than other media organizations, Upworthy emphasizes an emotional connection with readers, as they explain in their recent memo: “for us, headlines are an important means to an even more important end: drawing massive amounts of attention to topics that really matter like health care costs and marriage equality and global health. And good news: It’s working. Last month, 87 million people visited Upworthy for videos about racial profiling, gender bias, reproductive rights.” (Emphasis in original)

At the same time, Klein points to the minuscule volume of content Upworthy publishes in a month as a possible reason for its success. Upworthy’s 250 postings in November, according to NewsWhip.com, are a fraction of what we see on the Huffington Post (over 18,000), Buzzfeed (over 3,000) or Gawker (1,000 posts). At the same time, the average number of Facebook likes per Upworthy post dwarfs any competitor:

[Chart: average Facebook likes per post, with Upworthy far ahead of its competitors.]

Upworthy itself acknowledges that clickbait is a viable strategy to get page views, but its staff tries to draw a distinction by saying it is the “actual quality of the content” that drives people to click the “Share” button on Facebook. What constitutes “quality” content is open to wide interpretation, and media organizations often try to explain their success through providing “quality content.” However, Upworthy’s low volume of content suggests an emphasis on maximizing the value of individual postings instead of maximizing the volume of postings and hoping a certain percentage become hits:

“Our top curators comb through hundreds of videos and graphics a week, looking for the 5-7 that they’re confident are super-shareable. That’s not a typo: We pay people full-time to curate 5-7 things a week.”

Klein buys into this logic, suggesting that posting a large volume of content may go beyond diminishing returns and actually be counter-productive. After all, if you could identify which stories will spread widely and which will not, it makes economic sense to ignore the stories that won’t. Scholars of cultural industries should recognize this problem. In movies, music, books, television and other cultural industries, only 10 to 20 percent of all works turn a profit. If all movie tickets have a similar cost, why pay for a terrible movie when you could watch a great one instead? News and information online have similar issues. If every story and posting is a click away, why not spend your time on the best content? After all, it’s impossible to read everything on the Internet.

The problem with most “just invest in the most shareable content” strategies in cultural industries is that people have free will, so it is impossible to fully predict what will be a hit. Hollywood is full of big budget movies that studios were convinced would make millions of dollars, only to fall flat. In my days as a reporter, I was always surprised when readers flocked to stories I wasn’t particularly fond of, but ignored what I thought was my best work. Upworthy’s low volume, high sharing strategy seems to violate the economic rules of cultural industries. How do they do it?

Both Klein and Meyer point to Facebook’s algorithms for placing media organizations’ content on users’ news feeds, but this only tells part of the story. Yes, Upworthy’s low volume of posts helps them promote every posting on Facebook without being penalized. Yes, changes in Facebook’s algorithms could help more professionalized media organizations. However, NewsWhip’s metrics suggest another distinctive part of Upworthy’s strategy: their social media presence is almost exclusively Facebook, with little traffic coming from other social media:

[Chart from NewsWhip: top social publishers for November 2013 – Upworthy’s sharing comes almost entirely from Facebook, with little from Twitter.]

Source: http://blog.newswhip.com/index.php/2013/12/top-social-publishers-november-2013

While Huffington Post and Buzzfeed are widely repeated on both Facebook and Twitter, Upworthy is rarely mentioned on Twitter, suggesting a broader lesson about how Upworthy fits a distinctive niche in the chronology of how news spreads today. Every media organization acts as a gatekeeper, listening to a wide range of sources seeking media attention and deciding who is worthy of coverage. Upworthy acts more like a “surrogate consumer” (Hirsch 1972), sifting through hundreds of potential stories like radio stations sift through hundreds of potential songs. Given the pace at which modern media organizations post, Upworthy would be one of the last movers. This explains why Upworthy gets so little traffic via Twitter, since Twitter users tend to be earlier adopters and sharers of news.

I propose that Upworthy’s headlines, emphasis on a small number of postings and reliance on Facebook sharing are all part of a broad strategy to target a niche audience that wants to feel informed – particularly with regards to a subset of progressive social issues – even though this target niche is not the first to find and repost news. Clickbait headlines that try to convey a more inspirational and positive form of group solidarity separate Upworthy from competitors like Gawker and partisan blogs, and may be particularly attractive to audiences that want to avoid the negativity of new media content but do not want to feel left out. Facebook is an ideal social networking site for finding this audience, since it is not the site of choice for breaking news and up-to-the-minute interaction, so Upworthy’s less timely postings should matter less. All of these pieces fit together, making Upworthy increasingly popular among its target audience niche. However, their approach is dependent on other media organizations and quickly turns off readers outside the target audience, suggesting a limit to how much other media organizations could benefit in the long term from copying Upworthy’s approach.