Given Facebook’s growth as a provider of news, it is not altogether surprising that a May 3 article on Gizmodo about the alleged suppression of articles from conservative-leaning sources has sparked significant public debate. Gizmodo’s reporting details a grim view of journalists at the social media giant. But who are the news curators at Facebook?
Curiously, the Gizmodo article pointed readers to a LinkedIn listing of Facebook employees with the title of “curator”. We’re in the midst of a large project, Newsroom21, examining employment trajectories of employees in modern newsrooms. This seemed like the perfect opportunity to use our methodology to add context to this story.
Our results show most curators are trained journalists with diverse backgrounds; most have prior experience working in digital media, and most are accustomed to the work of freelance or contract labor. From our perspective, the reality is that social media organizations are increasingly the spaces where audiences are finding news, and therefore, it’s not altogether surprising that they’re appropriating employees from “traditional” news outlets. It is, however, somewhat surprising that there is such a high concentration of employees with print news backgrounds, as compared to television, digital media, magazines or other news sources.
Traditional organizational research shows that when companies enter into a new market, they will often hire employees with related skills in order to improve competitiveness. Nevertheless, the strong presence of print journalism compared to other industries is somewhat surprising and merits further exploration. On the other hand, it’s also not surprising the exact nature of these job roles is in flux as organizations such as Facebook determine the best way to incorporate a news function into their products.
The figure below shows a network map of the employment histories of the 18 news curators listed publicly on LinkedIn. For this analysis, we’ve de-identified the data and generalized for the purposes of discussion. In order to create the first visualization, we looked at the prior employers listed by Facebook news curators, and we categorized those employers into nine different categories in addition to Facebook.
The thickness of a connecting line and the intensity of its color indicate how many employees moved between those two industries.
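To make the construction concrete, here is a minimal sketch of how pairwise job transitions can be counted to produce the edge weights behind such a map. The work histories below are invented for illustration, not our actual de-identified data:

```python
from collections import Counter

# Invented, de-identified work histories: each list is one curator's
# sequence of employer categories, ending at Facebook.
histories = [
    ["Print newspapers", "Digital media", "Facebook"],
    ["Magazines", "Marketing / PR", "Facebook"],
    ["Digital media", "Facebook"],
]

# Count transitions between consecutive industries; the count becomes
# the edge weight (drawn as line thickness / color intensity).
edges = Counter()
for history in histories:
    for src, dst in zip(history, history[1:]):
        edges[(src, dst)] += 1

for (src, dst), weight in sorted(edges.items(), key=lambda e: -e[1]):
    print(f"{src} -> {dst}: {weight}")
```

The resulting weighted edge list can be handed to any graph-drawing tool to render the figure.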
At a high level, this suggests that although traditional journalistic skills are desirable, preliminary data point to a prioritization of “digital business” savvy, as evidenced by the presence of marketing and digital media skills. It is interesting to note that in many cases Facebook was “appropriating” employees from traditional industries: the third and fourth most commonly listed industries were magazines and print newspapers. On the other hand, the first and second most commonly listed industries were digital media and marketing / public relations.
Furthermore, none of these news curators came from other technology companies, pointing to the need for experience with traditional news norms in this particular job role.
Looking at the work histories, however, provides a more nuanced perspective. The above illustration merges the prior job transitions for all employees who listed news curator at Facebook as a current position. By focusing on the 18 distinctive work histories, there appears to be an emphasis on the marriage of traditional journalism with digital business skills. Almost every curator had experience working with Web content; those who worked in marketing and public relations generally worked in an editorial or production-based job role. Of course, additional research is needed in order to be able to substantiate this type of directional finding, and this is exactly what we’ve set out to do with the Newsroom21 project.
As was reported, most of the curators at Facebook are contractors; in other words, they are on short-term contracts in positions that do not offer set benefits such as healthcare. In turn, most curators work at least two jobs, stitching together an income with freelance work. On average, current curators had spent a little more than a year at Facebook (1.2 years), while overall they averaged 1.7 years in a job role, although this number is skewed by a number of long-term freelance jobs.
In this blog post, we survey some of the literature that has addressed how journalists and citizens have used chat apps during political unrest. In our upcoming report, “Meeting in Digital Spaces: News Organizations’ Uses of Chat Apps during Political Unrest,” we define political unrest as widespread dissatisfaction with a government, manifesting itself in organized protests with different levels of intensity and scale.
Much of the scarce literature on journalistic uses of chat apps focuses on the advantages of such technology. This can involve sourcing through WhatsApp, which ties accounts to real numbers, so journalists can easily call back to verify information (Barot et al. 2015). It can also involve encrypting information to facilitate contact with sources who do not otherwise feel secure (Barot et al. 2015). Chat apps can also help news organizations distribute and push news content to audiences (Barot et al. 2015). For reporters, chat apps allow them to be witnesses at a distance (Mabweazara 2011). And for large and complex stories, chat apps allow a team of reporters to share information in real time with each other and with newsrooms (Mabweazara 2011). Journalists can also source news by subscribing to receive mass texts from protest groups and other participants in a public debate. Mass texts are particularly important in crisis situations because they allow quick access to watchdog groups (Mabweazara 2011). These new flows of information have disrupted the relationships that journalists and NGOs have had in the past (Cooper 2007).
Chat apps provide opportunities for journalists and citizens taking part in political unrest, whether in the form of using SMS, Voxer, Viber, and WhatsApp to coordinate and organize street protests (Skålén, Abdul Aal, and Edvardsson 2015; Stacey 2015; Mottiar 2014; Lee et al. 2015). In the process, chat apps are becoming the sites of new trust among users (Haciyakupoglu and Zhang 2015; Lee and Ho 2014). Chat apps have also been reported to foster a sense of shared identity and solidarity among participants (Treré 2015) and to build social bonds (Haciyakupoglu and Zhang 2015). Treré (2015) noted how “social media backstage practices” that took place in private Facebook messages and WhatsApp groups, such as sharing memes and words of support, built an enduring sense of identity among YoSoy132 protesters in Mexico. Similarly, the sharing of confidential and trustworthy information and discussion that took place in private groups on WhatsApp and Facebook seems to create the most long-lasting social bonds (Haciyakupoglu and Zhang 2015). Scholars have also noted hindrances associated with chat apps, such as rumors and misinformation (Malka, Ariel, and Avidar 2015; Cottle 2011) and formal (China’s Great Firewall) or informal (self-censorship in the face of surveillance) censorship (Lee and Ho 2014; Skålén, Abdul Aal, and Edvardsson 2015; Stacey 2015; see also Balkin 2014).
Our report contributes to this literature in three ways: it explores the challenges of covering political unrest, highlights the ways reporters and news organizations are deploying chat apps as part of their news production strategies, and identifies a set of changes in crisis journalism as chat apps become a mainstay of reporting.
Written in collaboration with Jesse Woo. See previous Tow blog posts on the project here and here.
Balkin, Jack M. (2014) ‘Old-school/new-school speech regulation’, Harvard Law Review.
Barot, Trushar and Oren, Eytan (2015) ‘Guide to Chat Apps’, Tow Center for Digital Journalism.
Benton, Joshua (2014) ‘Here’s some remarkable new data on the power of chat apps like WhatsApp for sharing news stories’, Nieman Journalism Lab.
Cooper, Glenda (2007) ‘Anyone Here Survived a Wave, Speak English and Got a Mobile? Aid Agencies, the Media and Reporting Disasters Since the Tsunami’, The 14th Guardian Lecture, Oxford.
Cottle, Simon (2011) ‘Media and the Arab uprisings of 2011: Research notes’, Journalism.
Haciyakupoglu, Gulizar and Zhang, Weiyu (2015) ‘Social Media and Trust during the Gezi Protests in Turkey’, Journal of Computer-Mediated Communication.
Lee, Kingyshon and Ho, Ming-sho (2014) ‘The Maoming Anti-PX Protest of 2014’, China Perspectives.
Lee, Paul S.N., So, Clement Y.K., and Leung, Louis (2015) ‘Social media and Umbrella Movement: insurgent public sphere in formation’, Chinese Journal of Communication.
Mabweazara, Hayes Mawindi (2011) ‘Between the newsroom and the pub: The mobile phone in the dynamics of everyday mainstream journalism practice in Zimbabwe’, Journalism.
Malka, Vered, Ariel, Yaron, Avidar, Ruth (2015) ‘Fighting, worrying and sharing: Operation ‘Protective Edge’ as the first WhatsApp war’, Media, War & Conflict.
Mottiar, Shauna (2014) ‘Protest and Participation in Durban: A Focus on Cato Manor, Merebank and Wentworth’, Politikon South African Journal of Political Studies.
Skålén, Per, Abdul Aal, Kotaiba, and Edvardsson, Bo (2015) ‘Cocreating the Arab Spring: Understanding Transformation of Service Systems in Contention’, Journal of Service Research.
Stacey, Emily (2015) ‘Networked Protests: A Review of Social Media Literature and the Hong Kong Umbrella Movement’, International Journal of Civic Engagement and Social Change.
Treré, Emiliano (2015) ‘Retaining, proclaiming, and maintaining collective identity in the #YoSoy132 movement in Mexico: an examination of digital frontstage and backstage activism through social media and instant messaging platforms’, Information, Communication & Society.
On May 11, the New York Daily News Innovation Lab hosted “Platforms As Publishers: Where Are We Now?” as part of their Conversations event series. The panel discussion explored the relationship between social platforms and publishers and its future implications, with specific focus on legal and business questions, shifting resources in the newsroom, and other issues.
Following an introduction by Cyna Alderman, managing director of the Innovation Lab, Tow Center Research Director Claire Wardle moderated the discussion featuring Samantha Barry, head of social media and senior director of strategy at CNN; Allison Lucas, general counsel at Buzzfeed; Choire Sicha, director of platform partners at Vox Media; and Carla Zanoni, executive emerging media editor for the Wall Street Journal.
Starting off the discussion, Wardle asked Lucas from Buzzfeed: “Is there an argument now that [platforms] are moving into the publishing space?”
“They would argue ‘no,’” Lucas said, noting the associated legal liabilities that come with publishing. Barry added that “while they’re not necessarily going into the content making, they’re definitely becoming more and more the curators of editorial content.”
Both Barry and Zanoni acknowledged the need to meet audiences where they are, but since subscriptions are a part of the WSJ business model, Zanoni said, “I do think for us it is still important to get people back to the site.” Her approach is to build new relationships with readers on platforms and to educate them about the value that WSJ can bring. “To not be experimenting on those platforms would mean that we would be left behind,” she said. Barry added that if she focused on driving viewers back to CNN constantly, it would be “just fighting the last war.”
“I care that you have a CNN news habit,” she said, regardless of where that habit is formed.
Lucas said that this new relationship between platforms and publishers is not without risk, and there is a fear that “you’re ultimately giving your content up.” But she said the platform terms are generally not that onerous, and the best defense is newsroom education on terms and conditions. One important condition for publishers is to have the option to take down the content if necessary.
Wardle asked about the “walled garden” nature of platforms and how useful the data coming from platforms has been for the publishers. Sicha noted, across Vox sites, “One thing we see is platforms reinventing analytics over and over again.”
Zanoni said it wasn’t about the quantity of the data, but the quality, and that more work needed to be done on defining engagement and “somehow creating a common narrative around data so I don’t have to do magic on my side with an Excel file to make sure we’re comparing apples to apples.”
Overall, the panelists said, newsroom understanding of platform distribution, both its implications and its possibilities, is increasing.
Sicha noted that Vox hired an engagement editor for Racked recently, and this person immediately wanted to know whether Racked would be on Instant Articles. And Lucas said that at BuzzFeed, “you do see an energy in the newsroom when new platforms are announced.”
Finally, Wardle asked, is this relationship sustainable for journalists?
Zanoni said that, for her, it comes down to storytelling. “At the end of the day, if we can remember that the journalism is the most important thing, and find the best home for that journalism, then it will remain sustainable.” Also, she said that publishers need to treat the relationship as “symbiotic,” and to take lessons learned from platforms and bring them back to the site.
Sicha said he is “extremely content agnostic” in his role to “ensure the longevity of journalism.” He looks from the outside in when devising strategies and plans. When an audience member asked how demographics play into content, Sicha said that engagement editors at Vox look “distinctly at who they are reaching, where, and what they want to do with them there.”
An audience member asked whether metrics from platforms ever factor into editorial considerations. Zanoni and Barry answered no, and Barry said, “At CNN we cover the light as well as the dark,” but that pandering to metrics would damage the brand.
Since its Journalism + Silicon Valley Conference in fall 2015, the Tow Center has been developing its research project on the evolving relationship between platforms and publishers. Preliminary results will be published soon.
Check out the NY Daily News’ event recap and highlight video of the “Platforms as Publishers: Where Are We Now?” event.
The full report is available to download and read at the Tow Center’s GitBook repository.
This report offers a guide to the use and significance of SecureDrop, an in-house system for news organizations to securely communicate with anonymous sources and receive documents over the Internet. SecureDrop itself is a very young technology. It was developed over the last four years, beginning during the period when the WikiLeaks submission system was down and it was unclear how else whistleblowers could safely transmit large caches of data to journalists.
The history of SecureDrop’s conception and development is thus entwined with some of the most striking moments in the recent history of digital journalism: the arrival of Julian Assange as a charismatic force calling for radical transparency; the remarkable life of the technology activist Aaron Swartz; the bravery of Edward Snowden in revealing the level of surveillance now exercised by government agencies worldwide; and the resulting alliance between journalists, activists, and hackers who wish to ensure the accountability of powerful organizations by publishing information in the public interest.
Through interviews with the technologists who conceived and developed SecureDrop, as well as the journalists presently using it, this report offers a sketch of the concerns that drive the need for such a system, as well as the practices that emerge when an organization integrates this tool into its news gathering routines.
In general, I found a fairly narrow and consistent set of practices among the journalists using SecureDrop. Many organizations designate just a handful of employees to check their system, and these employees act as operators, in a sense, who monitor the inbox and distribute promising submissions to the reporter who is best suited to assess and potentially act on that information. This is by far the most common model for the coordination of SecureDrop in newsrooms, and it appears to be so common largely because these practices were imprinted at the time of the system’s initial, guided installation by the SecureDrop developers.
Given its complexity, SecureDrop may appear at first like a radical new tool, but many reporters told me that it closely resembles many of the other channels newsrooms have traditionally made available for sources to contact them. The crucial difference is that SecureDrop restores the effectiveness of a reporter’s privilege to protect their sources through principled non-cooperation—such as refusing to testify in court—whereas pervasive digital surveillance has made this gesture effectively moot over the last decade. The reality is that when a reporter’s source can be identified through digital traces, the prosecution does not even need that reporter to testify. One of the explicit purposes behind developing SecureDrop has been to restore the possibility for journalists to protect sources whose communication devices might otherwise expose their identities.
Still, most readers must be wondering whether SecureDrop has proved worthwhile. This is a difficult question to assess because journalists are wary of revealing information that could put a source in danger. Even so, most of my informants, representing nine of the ten organizations studied here, confirmed that the system has been generally valuable as a reporting tool, if not particularly consistent. Many were not willing to disclose the specific stories that originated with tips or documents from SecureDrop, nor the frequency of these stories. Nearly everyone did confirm, however, that the technical and often tedious process of checking the SecureDrop inbox is worthwhile overall, both as a reporting tool and as a signal that their organization takes seriously the protection of its sources.
The news and information ecosystem is in the midst of change — again.
Mobile-first consumption is on the rise, smart homes are becoming mainstream and connected cars will soon take over the roads of major cities around the world.
Smart devices will require “smart content.” It’s only a matter of time before artificial intelligence (AI) becomes the backbone of the media industry of the future.
Today, most people find information via search or social. And while these two channels are radically different in functionality, they have one thing in common — any given article surfaced through these platforms is exactly the same for everyone in the world.
For example, The New York Times article, “Health Officials Split Over Advice on Pregnancy in Zika Areas,” reads the same to me, a 20-something male living in New York, and for an expecting mother in her mid-30s residing in Rio de Janeiro.
Content today is one size fits all. And why wouldn’t it be? A journalist writes a story hoping to reach as many people as possible.
Search and social help tailor information choices to individuals to a degree, but Google, Facebook and Twitter know that artificial intelligence will fundamentally change the equation. That’s why, since 2013, these companies have been investing substantial resources into the space and acquiring startups.
In Facebook Messenger, for example, several news organizations such as CNN and The Wall Street Journal are already using bots and some level of automation to deliver news through the platform.
Artificial intelligence understands the environment it operates in and performs certain actions as a result of it. AI seeks to learn what its users want and how they want it.
In the specific case of news media, articles can be processed through algorithms that analyze readers’ locations, social media posts and other publicly available data. Readers can then be served content tailored to their personality, mood and socioeconomic status, among other things.
AI allows journalists and media companies to create infinite versions of an article, resulting in increasingly relevant information that speaks directly to individuals — ultimately forming a more engaged audience.
Crystal is a program that adapts emails you write to the personality of recipients. For example, if you’re sending a note to a more laid-back person, the software suggests a change in tone from a formal introduction such as “Dear John” to a more colloquial “Hi” or “Hey.”
Crystal uses previous emails to that recipient as well as their social media posts to recommend certain language, tone and sentiment. It’s easy to see how this approach can be adapted for a newsroom – in fact, it can build off of pioneering efforts already underway, such as automated earnings reports by The Associated Press. The Tow Center has also conducted an extensive analysis of the implications of automation in journalism.
Artificial intelligence can even localize stories. If you live in California, you might not read a story entitled “Texas residents poisoned by toxic waste plant for years.” But if the story included an automated note highlighting a similar past incident in your city, you would probably be more inclined to look at it.
Beyond tailoring content to users, AI can help journalists do more investigative work by analyzing massive sets of data and pointing to relationships not easily visible to even the most experienced reporter.
While this technology can improve efficiencies in newsrooms, though, it should work in tandem with journalists, not replace them. Going forward, the challenge will be to make sure we continue to adhere to our standards and ethics.
Francesco Marconi is the strategy manager at The Associated Press and an Innovation Fellow at the Tow Center.
These questions are just some of the challenges that we faced as we began our research into understanding the 21st century newsroom workforce.
Technological disruption continues to impact the news industry, and many organizations are grappling with the transition to a digital environment, as well as the increasing importance of mobile and social technologies. According to the 2015 Reuters Institute Digital News Report, 25% of those sampled across the globe use a smartphone as the main device for news consumption; in the US alone, 44% use a smartphone to access the news. Further, the report found that 41% of the sample use Facebook for news each week. The findings are reinforced by Pew’s 2015 study, which found that half of Web-using adults in the US get political news from Facebook.
So how do modern newsrooms respond to these changes in audience news consumption behavior?
Our research examines changes in the nature of newsroom workforces and changes in the prerequisite skills for news company workers. Digital and data-centric roles requiring computational science and advanced analytic skills occupy a key role in the reinvented production and distribution of news. This is evident, for example, in the ways newsrooms are integrating algorithmic and automated production into traditional news processes. Moreover, a new area of professional expertise is emerging as computational skills and journalistic practice integrate (see our Tow colleagues’ work on Muck, specifically directed at reducing barriers between programmers and non-programmers in the newsroom). The new space is quickly developing, and yet our understanding of this transformation is only skin deep.
Our goal is to present a systematic analysis of the challenges facing managers of modern news organizations as newsrooms adapt to increasing complexity and new skill sets in the digital news environment. Our starting point in this research is the NYC media market; this is a biased sample, as it is clearly one of the most active media markets in the country, but it’s also an opportunity to examine a market at the forefront of digital change.
Of course, the process must begin with a defensible and representative sample of news media outlets. But again, what counts? We used a combination of databases and sources, including Cision PR and LinkedIn, to get a sense of the major news organizations headquartered in NYC. We focused exclusively on organizations that produce daily news and are headquartered in NYC. These were necessary decisions to create a manageable yet defensible study.
This resulted in a sample population consisting of 8,027 employees from a list of 17 organizations including: TheBlaze, NowThis, Slate, The Daily Beast, Mic, Patch, FOX Business Network, MSNBC, BuzzFeed, The New York Daily News, The Wall Street Journal, The Huffington Post, FOX News Channel, NBC News, CBS News, ABC News, and The New York Times.
This provides us with a starting point for analyzing the different employment histories of the individuals working at these organizations. There are clearly a number of limitations with this study, as with any research. Factors such as data and resource availability mean that you need to make tough decisions about what to focus your analysis on. Moreover, we’re working with imperfect data: US Census data is outdated, the Cision PR database serves another purpose, and LinkedIn data is self-reported. But these are all examples of “best available” data, and we’re working to creatively address many of the challenges associated with this project.
In order to generate automated news from these data, the first step is to ensure that the underlying data are available and of high quality. That is, you want to have data that are accurate and complete. This blog post describes our efforts in gathering these data and transferring them to a format that can be used to automatically generate news.
The PollyVote method and the underlying data are published in peer-reviewed scientific journals and are thus fully transparent and publicly available. Since the PollyVote incorporates all available forecasts in the combination, the dataset is quite extensive. For example, the data that were used to predict the 2012 election include nearly 8,000 individual daily forecasts (e.g., individual polls or model predictions). Note, however, that this figure only refers to predictions at the national (popular vote) level. If one also includes forecasts at the state level, which is our goal for the 2016 election, the dataset grows dramatically. Needless to say, this situation perfectly meets the conditions under which automation is most useful: (a) good data are available and (b) a large number of routine news stories need to be written.
For generating automated news stories, we collaborate with the German company AX Semantics, which is responsible for developing the underlying algorithms. Therefore, a first challenge within our project was to develop an interface through which AX Semantics can automatically obtain the PollyVote data in a structured (i.e., machine-readable) format. To allow for this possibility, project member Mario Haim developed an API, which contains both historical and 2016 forecast data for the combined PollyVote as well as its components at the national and the state level. However, access to the API is not limited to our project partners. Instead, in an effort to make our procedures fully transparent, we decided to make all data publicly available and free to use under the MIT license. Interested users may obtain data through a specific URL, and a dedicated API call generator allows for specifying an exact request. Details on the data as well as instructions for how to obtain them can be found here. Also, note that this is work in progress. Please write to us if you find any errors in the data.
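As an illustration of what consuming such a feed might look like, here is a short sketch in Python. The JSON field names and numbers below are invented for the example and are not the API’s actual schema; the one substantive assumption is that the PollyVote combines its component forecasts by averaging them:

```python
import json
from statistics import mean

# Invented response shape; the real API's field names may differ.
payload = json.loads("""
{
  "date": "2016-05-01",
  "level": "national",
  "components": {
    "polls": 52.1,
    "prediction_markets": 53.0,
    "expert_judgment": 52.4,
    "citizen_forecasts": 52.8,
    "econometric_models": 51.7
  }
}
""")

# Average the component forecasts into a combined PollyVote-style figure;
# a structured feed like this is all a text generator needs as input.
pollyvote = round(mean(payload["components"].values()), 1)
print(f"{payload['date']}: combined two-party vote share {pollyvote}%")
```

A text-generation engine can then slot `pollyvote` and its components into sentence templates to produce a routine forecast update.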
In the next post, I will describe our approach for generating automated news articles, some of which have already been published in both English and German. Note, however, that we are still early in the process. The quality of the texts will further improve. Yet, we decided to start publishing right away so that users can track how the texts have improved over time.
(Photo credit to Andrew DeVigal)
Last week I had the pleasure of attending the International Journalism Festival in Perugia, Italy – a distinctly medieval town wherein we discussed decidedly modern issues about the future of journalism. In its 10th yearly incarnation, the event attracts throngs of journalists from across Europe, and the world, eager to learn about the latest in the field. I participated in two panels, one called “Can a Robot Do My Job?” (No, obviously) and another about “Creating Community”, which I’ll recap in this post (you can also watch the video here). Creating Community was a discussion between Greg Barber (The Washington Post), Mary Hamilton (The Guardian), Mathew Ingram (Fortune Magazine), and myself, Nick Diakopoulos (University of Maryland), and was moderated by Federica Cherubini.
Greg kicked off the panel by defining community as an “interaction among people” including sharing of information, and acknowledging the range and diversity of types of communities that emerge around news information. Mary spoke about serving different types of community on The Guardian including communities around shared circumstances, desires, aspirations, and needs. Each of those types of communities can be served in different ways, sometimes by meeting and finding communities that are forming in other (off-site) places. Mathew spoke about the ideal or fictitious community that we all think exists, but doesn’t really. Communities are filled with real human beings, some of whom are flawed. The struggle of media companies is to deal with the community they have rather than some ideal community.
For my part I spoke about some of the ways that algorithms might be able to shape community. In particular to help (1) filter out the low quality (profane, vulgar, inappropriate) stuff, (2) surface and highlight the really great comments, (3) identify and build social context that can inform moderation decisions, and (4) alert moderators to threads or articles where “having an adult in the room” might be good for getting the conversation back on track. The Coral Project is working on some of these problems, and other tools, like KeepCon can help with moderating out the low end. My own project at UMD called CommentIQ is focused on trying to automatically rank comments based on editorial quality criteria so that really good and interesting comments can be surfaced.
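To give a flavor of the approach, a ranking of this kind can be as simple as a weighted score over a few quality signals. The features, weights, and comments below are purely illustrative inventions, not CommentIQ’s actual criteria:

```python
# Toy comments with pre-computed feature scores in [0, 1].
# (In practice these would be derived from the comment text itself.)
comments = [
    {"text": "Great!", "words": 1, "readability": 0.2, "on_topic": 0.3},
    {"text": "The article omits the 2014 ruling, which changed ...",
     "words": 48, "readability": 0.8, "on_topic": 0.9},
    {"text": "You people never learn.", "words": 4,
     "readability": 0.5, "on_topic": 0.1},
]

def quality_score(c):
    # Weighted sum of simple criteria: longer, readable, relevant
    # comments float to the top.
    length = min(c["words"] / 50, 1.0)  # cap so walls of text don't dominate
    return 0.3 * length + 0.3 * c["readability"] + 0.4 * c["on_topic"]

for c in sorted(comments, key=quality_score, reverse=True):
    print(f"{quality_score(c):.2f}  {c['text'][:40]}")
```

A real system would learn the weights from editor-curated examples rather than hand-pick them, but the surfacing logic is the same.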
A key point of discussion that emerged was what to do about the context collapse of so many different types of communities operating in what is essentially the same public space on news websites. From a design perspective you can have small communities and large communities, communities with high turnover or low turnover, and indeed communities that only need to be around for a day versus those that could last a lifetime. Different people can have vastly different motivations for being in an online news community: some want more facts, others want to be entertained, or to test their opinion out and develop their identity. Essentially there are a multitude of communities on any given news site, and each of those communities might need different feeding and care. Each of these types of communities potentially demands a different suitable design such as in the way contributions are moderated, in the use of “real” names or anonymity, and in the “rewards” that people are seeking in their community interactions. In terms of designing an online community it’s hard to be everything to everyone all the time.
Underlying the discussion about design challenges for creating a successful sociotechnical commenting system was this simple observation: News organizations first need to define what they want to do with their online communities. What’s the strategy? It’s only when news organizations get past the simple conceptualization of comments as something you hang off the bottom of the page, and start thinking about them as something that’s there strategically, that they’ll be able to reap the benefits and rewards of cultivating relationships with the individuals in those communities.
On March 24, the Tow Center launched “The Curious Journalist’s Guide to Data” – a research project led by Tow Fellow Jonathan Stray. (The report is available to download and read at the Tow Center’s GitBook repository.) In this book, Stray examines the principles behind data journalism and, more broadly, the fundamental ideas behind the human tradition of counting things.
The launch event, held at Columbia Journalism School, featured a presentation of the report followed by a panel discussion with Meredith Broussard, Assistant Professor at the Arthur L. Carter Journalism Institute of New York University; Mark Hansen, Director of the David and Helen Gurley Brown Institute for Media Innovation; and Scott Klein, Assistant Managing Editor at ProPublica. A full audio recording of the discussion is available on SoundCloud.
The event started with a short presentation by Stray, who explained why journalists in particular will benefit from adding quantitative concepts to their toolkit: “Sometimes you look at a chart and you think you see the story, but do you really? There’s more than one story you can pull out of a dataset. In fact, there’s more than one story you can pull out of a single number. Which one do you report? That’s a point of journalistic ethics. The story that you can’t prove wrong is your best shot. If you want to prove that something didn’t happen by chance, calculate how unlikely it is.”
In the panel discussion that followed, Mark Hansen noted, “Every discipline on campus is seeing its core artifacts digitized and opened to some kind of data analytics, whether we’re talking about History, English, Architecture, Business, and so on. In journalism, as part of this larger process, we have tremendous possibilities to tell stories in dramatically new, engaging, and frankly beautiful ways, that have nothing to do with spreadsheets. We should look for data opportunities, and it doesn’t always have to mean going to the census – there are so many things we can bring in.”
“Data is socially constructed,” said Meredith Broussard. “One of the things I say to my students is that working with data is not the same as doing pure math. Data is about people counting things, so if you can understand people, you can understand data.”
When asked by the audience about the option of relying on expert statisticians, Scott Klein said, “We are starting to see more complicated models, but I would say, know how to calculate the odds. That’s a basic, basic skill that will tell you if something that happened is unusual. That will solve a lot of problems in your stories.”
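Klein's "calculate the odds" advice can be illustrated with a short back-of-the-envelope check: given a baseline rate, how likely is an observed count to occur by chance alone? The scenario and numbers below are invented for illustration; only the binomial arithmetic is standard.

```python
# Back-of-the-envelope odds check: probability of seeing k or more
# "successes" in n independent trials when each succeeds with
# probability p (an upper-tail binomial probability).
from math import comb


def binomial_pvalue(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(
        comb(n, i) * p**i * (1 - p) ** (n - i)
        for i in range(k, n + 1)
    )


# Hypothetical example: 8 of 10 audited records are flagged, when the
# baseline flag rate is 30%. Could that plausibly be chance?
p_value = binomial_pvalue(8, 10, 0.30)
```

Here the probability comes out well under 1 percent, which is the kind of quick signal Klein describes: enough to tell a reporter that something unusual is going on and worth digging into, though not a substitute for a fuller statistical model.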
Broussard added, “I recommend the buddy system – having a buddy that you can talk to about all the different kinds of topics that you walk into when you’re a journalist. Have a buddy who’s an accountant, a lawyer, a doctor, a mathematician. When I get stuck on something as a journalist, or when I feel like I’m in it over my head, I call my buddy who’s a mathematician and we talk about it.”
Panel members were then asked how news teams can render complicated graphs and figures on mobile phones, where more people are likely to read the story. “My only advice is to care enough to do it,” said Klein. “It’s a long process of testing, and it’s worth doing. I think that there is no such thing as mobile content. Wherever it is I’m reading at that moment, I want to know the same things.”
Efrat Nechushtai is a current Ph.D. candidate at the Columbia University Graduate School of Journalism.
Substantive local news is a rare commodity in many communities across the United States. For areas with high levels of violence, crime, and poverty, this absence can be compounded by a history of stigmatization. Often the only local news available is negative.
This report explores potential impacts of local solutions journalism, particularly for underrepresented and stigmatized communities. Solutions journalism explores responses to systemic social problems—critically examining problem-solving efforts that have the potential to be scaled.
Proponents of this genre suggest these stories offer a pathway to engaging audiences. Preliminary research suggests readers of solutions-oriented stories may be more likely to share articles and seek related information.
However, little research has explored solutions journalism at the local level or in stigmatized communities. This study attempts to address this gap. Following a community-based media project in South Los Angeles, six focus groups were held with 48 African American and Latino residents to examine how participants responded to the solutions journalism format.
Study findings illustrate how residents navigate and critically interpret mainstream local coverage—often using alternative digital sources to cross-check stories and seek information. The study also suggests that these residents respond positively to solutions journalism, though participants’ enthusiasm may be tempered by larger concerns regarding structural inequalities. Participants suggested they would be more likely to seek out news and share stories if solutions journalism were more common, and many said these stories helped them envision a way to become personally involved in community problem-solving.