Tow Center Announces Research Director and 2017 Fellows

Jonathan Albright Joins Tow Center for Digital Journalism as Research Director

The Tow Center is pleased to announce Dr. Jonathan Albright as its new Research Director. Jonathan’s research on networks of propaganda and misinformation has recently attracted attention around the world. His research into the use of platforms such as YouTube to proliferate high volumes of automated misinformation has been featured in a broad range of publications, including The Guardian, The Washington Post, and Fortune.

His work lies at the intersection of communication, culture, and technology, focusing on the analysis of online and socially mediated news events and activism, data-driven journalistic methods, and visual storytelling. Jonathan joins the Tow Center from Elon University, where he was an assistant professor of media analytics in the school of communication. In his role as Research Director, Jonathan will lead the Center’s fellows and research projects, working closely with Tow Center Director Emily Bell.

We are extremely excited that Jonathan is coming to work at the Tow Center, bringing with him his cutting-edge research into the new ecologies of journalism and misinformation. There is no more pressing issue in the field right now and Dr. Albright’s work will add to the Tow Center’s reputation for examining emerging trends in technology and how they apply to the field of journalism. Jonathan’s understanding of how technologies are being deployed and networked through social platforms to create an ecosystem of targeted misinformation is central to understanding current issues affecting both politics and journalism.


Tow Knight Projects and Senior Fellows Focus Tow Center Agenda on Investigating the News Environment of the Social Web

This new cohort of Knight News Innovation Fellows at the Tow Center brings a wealth of expertise in examining some of the most timely and important issues facing journalism today. They will pursue a range of research topics, including automated journalism, collaborative journalism, information integrity, local journalism, political polarization, and the General Data Protection Regulation (GDPR).

These new fellows join over 60 current and former fellows at the Tow Center. The Fellowship projects are funded by the John S. and James L. Knight Foundation. Read more about all Knight News Innovation research projects at the Tow Center here.


2017 Knight News Innovation Fellows Projects:

Fact Trust

Mike Ananny, Assistant Professor of Communication and Journalism, Annenberg School for Communication and Journalism, University of Southern California

How does a Facebook-led partnership of news organizations and fact-checkers mix algorithmic and editorial judgment to fight “fake news”? Through interviews with key personnel and analyses of documents and infrastructures, this project tells the story of how techno-journalistic platforms make facts. Better understanding such hybrids helps scholars, technologists, journalists, and audiences appreciate how to trust and critique news networks—and how to think about and reconfigure power between publishers and platforms.

Engagement with Robot News: How Automated Journalism Affects Credibility and Engagement

Jan Boehmer, Assistant Professor of Journalism in the College of Communications at Pennsylvania State University

Technological advances and societal transformations have shaken up the journalism industry. One of the most disruptive examples of this change is the emergence of automated journalism. While a growing number of news organizations rely on algorithmic processes converting data into narrative, the effects on the audience are not fully explored. This research investigates how attributing authorship of news items to an algorithm affects readers’ perceptions of credibility and intentions to engage with the content.

Collaborative journalism and the creation of a new commons

Carlos Martinez de la Serna, Director of Digital Innovation at Univision News

The SF Homeless Project, the News Integrity Initiative, and ElectionLand are three major examples of an emerging pattern in journalism: the cooperation of multiple organizations and individuals to address big challenges at a scale that no single organization could by itself. This project will research how the combination of decentralized, networked, and traditional models for news production and distribution are creating new opportunities to support journalism.

Partnering with the Public: How ‘Audience Engagement’ is Reinventing Local Journalism

Jacob L. Nelson, PhD Candidate, Northwestern University

This project explores the way that three news organizations (City Bureau, Hearken, and The Chicago Tribune) conceptualize, implement, and measure audience engagement. At a moment when the news media’s credibility and economic sustainability are in doubt, this project examines what journalists in both traditional and innovative newsrooms believe “success” should look like. In doing so, it attempts to answer the question: Are journalism’s goals changing, or just its methods?

From Polarization to Public Sphere

Andrea Wenzel, incoming Assistant Professor, Temple University with Sam Ford, media executive and consultant

This research study examines what political polarization and urban-rural divisions look like in the daily lives of residents at the local level. The project focuses on a case study of a region of Kentucky, including the “purple” college town of Bowling Green and the more “red” and rural area of Ohio County. Drawing from interviews and media diaries, the study examines the communication ecologies of residents and the potential for community engagement across demographic and ideological lines. The study will also explore challenges and opportunities in the rural media landscape through a workshop with local and regional media and community stakeholders.

Bridging Stories: Countering Misinformation in Chinese Language News Ecosystem

Chi Zhang, Doctoral Candidate, Annenberg School for Communication and Journalism, University of Southern California

This project investigates and intervenes in the immigrant Chinese news ecosystem, which has seen significant misinformation, to bridge the information silos between Chinese-speaking immigrants and their surrounding community. In collaboration with Alhambra Source, a trilingual civic news site serving the immigrant majority city of Alhambra, and Asian Americans Advancing Justice-Los Angeles, we monitor ethnic Chinese media and social media outlets, and engage community members to produce and distribute bridging stories.  

The General Data Protection Regulation in a media context: threat or opportunity for media companies?

Hugo Zylberberg, Cyber Fellow, Columbia University’s School of International and Public Affairs with Susan E. McGregor, Assistant Director of the Tow Center for Digital Journalism

The General Data Protection Regulation (GDPR) will take effect in the EU in May 2018, yet most companies, including media companies, still know very little about its implications for their business models. Business models in the media ecosystem have evolved rapidly over the last couple of decades, and traditional players have been threatened by new entrants. This project will explore the many ways this imminent legislation will affect media companies, as well as the technology platforms on which they increasingly depend, and will also examine how the media could, in turn, seize the regulation as an opportunity to accelerate their digital transformation.

Senior Research Fellow:

Award-winning data journalist Jon Keegan joins the Tow Center in 2017 as a Senior Research Fellow from The Wall Street Journal where he led projects in data journalism and visualization. In the past year Keegan built WSJ’s award-winning “Blue Feed, Red Feed” which visualizes political polarization on Facebook. At the Tow Center, Jon will be leading an initiative to explore partisan sources on social media.

This project—the first in a suite of tools for consumers of news on social media—will build an open database of popular news sources on Facebook, illustrating their reach across platforms, surfacing data about the owners, advertising networks, authors, and affiliations. This will take the form of a user-friendly public website, as well as an API so other developers can build tools that use this database to illuminate the murky world of partisan news on social media. This project aims to empower the public to be more responsible about the news they share with their networks, as well as increase media literacy around online news sources.

Jon joins Senior Research Fellows Pete Brown, Elizabeth Hansen, and Andrea Wenzel.



The Fellowships are part of a $3 million research program funded by the Knight Foundation. Since the program began, the Center has published a number of reports as well as shorter guides on key trends including automated journalism, chat apps, and podcasting. The Tow Center also hosts large-scale conferences and smaller, skills-based workshops to further conversation around the published research.

The Tow Center offers fellowships to academics, journalists and technologists, disseminating research for application in newsrooms as well as classrooms. For more information, please email

About the Tow Center for Digital Journalism

The Tow Center for Digital Journalism, established in 2010 through gifts from the Tow Foundation and others, provides journalism students with the skills and knowledge to lead the future of digital journalism and serves as a research and development center for the profession as a whole.

About Knight Foundation

Knight Foundation supports transformational ideas that promote quality journalism, advance media innovation, engage communities and foster the arts. We believe that democracy thrives when people and communities are informed and engaged. For more, visit


Symposium: Next Gen Podcast Distribution Protocols

MAY 11, 2017, 9am–5pm
Harvard Law School
Wasserstein Hall
1585 Massachusetts Avenue
Cambridge, MA

Next Gen Podcast Distribution Protocols: Innovation and governance in open development initiatives

Presented by the Berkman Klein Center for Internet & Society at Harvard University and the Tow Center for Digital Journalism at the Columbia Journalism School, in collaboration with the Open Working Group



On May 11, 2017, the Berkman Klein Center for Internet & Society and Tow Center for Digital Journalism will host and facilitate a symposium, in collaboration with the open working group, to address the process of developing standards that support the distribution of syndicated audio content.  The event will look back at the evolution of the RSS protocol and look forward at the need for new technical infrastructure to support an expanding podcast distribution landscape.  Participants will have the opportunity to engage in both higher-level policy discussions and technical deep-dives throughout the course of this one-day event.

The goals of the symposium include furthering cooperation among various players in the world of podcast creation and distribution and consideration of recommendations on standards, enhancements, extensions, and other methods to support the growth of podcasting as an open and inclusive medium.  It will bring together academic, non-profit, and commercial constituencies to address, among other things:


  • the history of media protocols;
  • promises and pitfalls associated with open development initiatives;
  • rights issues relevant to openly syndicated content;
  • questions of governance and stakeholder engagement; and
  • technical planning and implementation for next generation podcast distribution

The symposium will mix talks and panels that generally address these issues (curated by the Berkman Klein and Tow Center teams) with opportunities for breakouts that allow deeper dives into technical questions around distribution protocols for podcasts and other forms of serialized media (facilitated by members of the community).

Registration is limited; sign up here.  The symposium will be followed by a separate, two-day “Audio for Good” event, co-hosted by PRX, RadioPublic, and the HBS Digital Initiative.  Applications to participate can be submitted here.



In the last two years, podcasting has hit a tipping point in mainstream adoption. Over fifty-seven million people in the US listen to podcasts each month, a figure growing at over 25% per year. A rapidly expanding industry and ecosystem is taking shape across content creation, publishing, distribution, discovery, and monetization. Apple remains the largest platform for podcasting, with major players like Google, Spotify, Audible, and Pandora beginning to integrate podcasts into their services. Independent creators, content networks, podcast apps, and a variety of service providers are entering the space. Public radio remains a foundational force, dominating the charts with shows from NPR, PRX, WNYC, This American Life, and others.

There is also a growing number of industry conferences, events, and associations starting to address myriad needs in the podcasting space, including Podcast Movement, the Podcast Summit, Third Coast International Audio Festival, in addition to an uptick in live events for podcast fans in venues across the country.

Growth in content, audience, and revenue is intensifying the competitive landscape, with resulting pressure to address problems related to metrics, metadata, advertising, audience insight, and more.

Developing the technologies that power an open standard like podcasting presents unique and exciting challenges. From the basic and ubiquitous formats the medium has come to rely on, to recent advances like dynamic audio serving and sophisticated metrics and analytics, there is a wide array of topics that must be addressed.

This convening seeks to provide a forum in which to discuss specific technical details that relate to podcast distribution and to learn from and compare notes with people who have been deeply involved in questions of governance, standard-setting, and open innovation across a wide variety of fields.  A primary goal for the community involves establishing processes and developing timelines for future development initiatives.



The Berkman Klein Center for Internet & Society is a research center based at Harvard University. The Center’s mission is to explore and understand cyberspace; to study its development, dynamics, norms, and standards; and to assess the need or lack thereof for laws and sanctions. Berkman Klein is a research center, premised on the observation that what it seeks to learn is not already recorded. The Center’s method is to build out into cyberspace, record data, self-study, and share. Its mode is entrepreneurial nonprofit.

The Tow Center for Digital Journalism, established in 2010, provides journalists with the skills and knowledge to lead the future of digital journalism and serves as a research and development center for the profession as a whole. Operating as an institute within Columbia University’s Graduate School of Journalism, the Tow Center is poised to take advantage of a unique combination of factors to foster the development of digital journalism. Its New York location affords access to cutting-edge technologists, a strong culture of journalism, and multiple journalism and communication schools attached to outstanding universities. The Tow Center is where technology and journalism meet, and where education and practice meet.

The open working group is a community-driven initiative with a mission to ensure that podcasting grows to meet the needs of listeners, creators, producers, publishers, advertisers, and developers, without sacrificing the groundwork that has been established to make it an open and inclusive medium. The goal of the working group is to develop clear and comprehensive standards and best practices. The group now includes more than 100 representatives from a growing number of podcast industry stakeholders, including international participants, and intends to incrementally release updates to existing standards and recommendations for new proposals.

Platforms and Publishers

On the Design of Hyperlocal Communications

At a time when the accuracy of news sources is in question, and politicians want to restrict access to information, hyperlocal communications have more potential than ever. You Are Here is a project that essentially consists of a Wifi signal that broadcasts a single website. Users can connect to it on their cell phones to leave comments on a curated topic. The project was designed by a team of Tow Fellows including Sarah Grant, Susan McGregor, Benjamen Walker, Dan Phiffer, and Amelia Marzec. Two installations of the project occurred in Tompkins Square Park and the High Line. These are some observations on the challenges of attracting users to a hyperlocal project.

you are here app
The public expectation of an open Wifi signal is simply that there is potential to connect to the Internet for free. If users find they aren’t able to complete their specific task, which may be to post to Instagram in the park, they will drop off and look for another option. This is the biggest hurdle for any Wifi-based project. You Are Here relies on messaging through posters at the sites of installation to instruct users on how to encounter the project. The audiences and needs in those locations–a barbershop and a restaurant–are different from the audiences that appear in the interviews for the curated content: visitors to Tompkins Square Park and the High Line.

on location
The design of the system must fit the needs of the community. One of the requirements for success is to go into an existing community and find out what their specific needs are, rather than designing a system and then trying to build a community around it. This was a challenge with You Are Here. Hyperlocal communications would be most beneficial when there is a driving need for us to rely on our neighbors.

The design of systems that carry information must be considered from every angle. A couple of previous projects by the You Are Here team members were designed as systems of protest: one by Dan Phiffer, and Signal Strength by Amelia Marzec. Inherent in their design is the expectation that they will be used in a democratic setting, where users set the tone of the conversations. These methods focus on the decentralization of information, and are less effective in settings that require top-down information.

Recently considered for the purposes of advertising, You Are Here and similar systems lack significant traffic that would validate them for use in a corporate setting. The failures of these systems as money-making machines contain the potential for them to be used for their original purposes—to give rise to citizen journalism and peer-to-peer communications.

For the long term, users need to care about the content and their interactions with each other. Within commenting systems, users will return when they are emotionally invested—even if that means they are having an argument online. Would an old-fashioned bulletin board be more effective? One benefit of that would be that people without access to smartphones would be able to participate.

In the event of a true emergency, with no connection to the internet, You Are Here could be a valuable addition to a specific local community, possibly if it were able to operate as a forum. Residents would be able to share how they got access to medical care and ATMs, or to discuss rumors from the outside. As we move into uncertain times regarding the use and delivery of information, we will need to be vigilant in finding the truth tellers in our communities that are focused on the greater good.

New Ideas for Automation in the Newsroom


How can automation help reduce the cost of story discovery in a newsroom? As a Tow fellow in 2015, I had the opportunity to explore this question by building a tool, available online, that automates some of the exploratory data analysis associated with investigative work.

Newsrooms need automated investigative tools now more than ever because their human investigative resources have been decimated. Since the 2008 crash, among the most severely cut departments have been the ones that do long-term, in-depth investigative reporting. Enterprise ideas are the hardest kinds of story ideas to come up with: you don’t know what you don’t know about what’s going wrong in our public institutions. Most investigative stories are reactive, meaning they result from a tip from a whistleblower, versus being proactive, resulting from the reporter’s own inquiry. Since newsrooms were cutting investigative resources, I wondered if we could build computational systems to help the remaining reporters do their jobs more easily and proactively discover story ideas in public data.

During my Tow fellowship, I explored whether a kind of custom software called a Story Discovery Engine could facilitate enterprise reporting on campaign finance issues. (It can.) I was curious about campaign finance in anticipation of the 2016 US presidential election. I had heard a lot about dark money and superPACs in the wake of the 2010 Citizens United decision, but I knew there was a vast amount I didn’t understand about this complex system. However, I did know that reporters who work with campaign finance data spend a lot of time doing the same routine things: downloading data from the FEC and other sources, cleaning it, associating data with known entities, and building basic visualizations. Any time you have routine processes, there is an opportunity for automation.
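The routine steps described above, downloading records, cleaning them, and sorting them into recognizable categories, are exactly where automation pays off. As a minimal sketch of that kind of cleanup step (the field names, category labels, and dollar threshold here are hypothetical illustrations, not the FEC's actual schema or Bailiwick's real logic):

```python
# Sketch of routine cleanup of campaign-finance records.
# All field names and category rules are illustrative assumptions.

def clean_record(raw):
    """Normalize one raw committee filing record."""
    return {
        "committee": raw.get("committee_name", "").strip().title(),
        "total_receipts": float(raw.get("total_receipts") or 0),
        "total_disbursements": float(raw.get("total_disbursements") or 0),
    }

def categorize(record):
    """Bucket a committee into a recognizable category (hypothetical cutoff)."""
    if record["total_receipts"] == 0:
        return "inactive"
    return "high-volume" if record["total_receipts"] > 1_000_000 else "standard"

raw_filings = [
    {"committee_name": "  friends of example  ",
     "total_receipts": "2500000", "total_disbursements": "2100000"},
    {"committee_name": "main street pac",
     "total_receipts": None, "total_disbursements": "0"},
]

cleaned = [clean_record(r) for r in raw_filings]
for rec in cleaned:
    rec["category"] = categorize(rec)
```

Once records are normalized this way, the same code runs unattended on every new batch of filings, which is the whole point: the reporter's time goes to judgment, not data janitoring.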

The last time I built a Story Discovery Engine, I optimized it to find a story in education data. This time, instead of starting with my own story idea, I started by interviewing other journalists. I specifically asked about the kinds of stories they look for, and what the indicators are that suggest a story might be hiding in the data.

To an outsider, these indicators are almost impossible to spot. But to these campaign finance gurus, the signs of corruption were clear as day. One common indicator is administrative overspending. Nonprofit organizations are required to report their income and spending. If the organization spends an unusually high percentage of its income on administration, it is often an indicator that something is amiss internally, and there is likely an opportunity for a story.

However, deciding what percentage is “unusually high” is a judgment call. It is also a judgment call to determine whether there is a story worth pursuing. Some fluctuation in administrative expenses is normal. There might be a perfectly good reason for an organization to have unusually high administrative expenses; a high percentage does not necessarily imply corruption. This ambiguity is the reason that it is unwise to build a system that claims to automatically identify investigative story ideas. It would be unfair (not to mention unethical) to accuse an organization or a public servant of misdeeds based on a naïve computational analysis. It requires human decision-making to fully consider what is going on in a given situation.
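The indicator logic above is simple enough to sketch in a few lines. The point of the sketch is the design constraint the text describes: the output is a list of leads for a human reporter to review, never an accusation, and the threshold is an arbitrary placeholder for a judgment call (the 25% figure and field names are my illustration, not values from the source):

```python
# Sketch of the administrative-overspending indicator.
# The threshold is a hypothetical stand-in for a human judgment call;
# a flag is a lead to investigate, not a finding of wrongdoing.

ADMIN_RATIO_THRESHOLD = 0.25  # illustrative; a reporter would tune this

def flag_for_review(orgs, threshold=ADMIN_RATIO_THRESHOLD):
    """Return organizations whose admin spending ratio exceeds the threshold."""
    flagged = []
    for org in orgs:
        if org["income"] <= 0:
            continue  # ratio undefined; skip rather than guess
        ratio = org["admin_spending"] / org["income"]
        if ratio > threshold:
            flagged.append({"name": org["name"], "admin_ratio": round(ratio, 2)})
    return flagged

orgs = [
    {"name": "Org A", "income": 100_000, "admin_spending": 40_000},
    {"name": "Org B", "income": 100_000, "admin_spending": 10_000},
]
leads = flag_for_review(orgs)  # only Org A is surfaced for human review
```

Keeping the threshold as an explicit, tunable parameter rather than a buried constant is one way a human-in-the-loop system makes the judgment call visible instead of hiding it inside the code.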

This human component is essential for newsrooms to remember. Computers can’t independently determine corruption. Currently, a human is necessary in any automated investigative system. Instead of a fully automated investigative system, in this case I built a human-in-the-loop investigative system. Most people dream of full automation: cars that drive themselves, robots that deliver packages. I don’t dream of this. I’m fine with a world that includes people. I like human judgment, as flawed as it is. I like the drama and the idiosyncrasies of human systems. The difference between a fully autonomous system and a human-in-the-loop system is like the difference between a drone and a jet pack. The drone is autonomous: it is programmed to go to a particular location, drop a bomb or take a picture, and then come back to base by itself. A jet pack (in theory) is designed to be strapped onto the back of a human being in order to accelerate the human’s effort. Both are legitimate system models, and each is useful for a different type of task.

Bailiwick, the system that I built, automates some of the grunt work associated with downloading FEC data, cleaning it, putting it into a database, organizing it into recognizable categories, and creating simple visualizations. The visualizations, plus the knowledge engineering layer that organizes the data into recognizable categories, allow a reporter to quickly make sense of complex data. Bailiwick analyzes data for each of the thousands of 2016 federal candidates and for 17,000+ active political committees. There is also an alerting function that allows a user to set up a personal profile of candidates or races to follow. A Pennsylvania reporter, for example, could choose to follow the two frontrunners in the PA Senate race and a handful of frontrunners in PA House races. Bailiwick sends an alert to the reporter via a private Slack channel every time there is a filing by a candidate the reporter follows. Setting automatic alerts, via Slack or a service like IFTTT or Zapier, has long been known as an effective way to use automation in the newsroom.
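The alerting pattern described above, match new filings against a reporter's watch profile and push a message, can be sketched compactly. This is not Bailiwick's actual code; the names and record shapes are illustrative, and the delivery step (e.g. an HTTP POST to a Slack incoming-webhook URL) is left out so the matching logic stands alone:

```python
# Sketch of watch-profile alerting for new campaign-finance filings.
# Record fields and candidate names are hypothetical. Delivery to a
# private Slack channel would happen separately, e.g. via an
# incoming-webhook POST; only message construction is shown here.

def build_alerts(filings, watched_candidates):
    """Return one alert string per filing by a watched candidate."""
    alerts = []
    for filing in filings:
        if filing["candidate"] in watched_candidates:
            alerts.append(
                f"New filing: {filing['candidate']} ({filing['form']}) "
                f"filed {filing['date']}"
            )
    return alerts

profile = {"Jane Doe", "John Roe"}  # the reporter's followed candidates
filings = [
    {"candidate": "Jane Doe", "form": "F3", "date": "2017-01-31"},
    {"candidate": "Someone Else", "form": "F3", "date": "2017-01-31"},
]
alerts = build_alerts(filings, profile)
```

Running a check like this on each nightly data refresh turns a passive database into the proactive tip stream the essay argues newsrooms need.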

Bailiwick is not small. As of January 1, 2017, the system included 11,032 lines of Python; 13,158 lines of HTML; 40,113 lines of JavaScript; 13,601 lines of CSS; 2.9 million lines of text; 2,892 lines of markdown; and approximately 94.2 million records in the database. The number of records increases every time new filings are added to the FEC site. The system gathers small updates from FEC every night, and then it completely refreshes all of its data from FEC once a week.

Bailiwick is not designed for beginners, but rather for experienced reporters who are specifically looking to find stories in campaign finance data. Top-tier news organizations like the New York Times or ProPublica or the Center for Public Integrity have software developers on staff who build custom, non-public-facing software for reporters to use on campaign finance data. Bailiwick is designed for organizations that don’t have internal developers.

For an investigative project, Bailiwick was relatively inexpensive to build. In Democracy’s Detectives: The Economics of Investigative Journalism, James T. Hamilton outlines some of the costs and benefits associated with complex investigative projects. He estimates that “Deadly Force,” the Washington Post’s 1999 Pulitzer Prize-winning story on D.C. police shootings, cost about $487,000 (in 2013 dollars) to create. Hamilton writes: “While accountability reporting can cost media outlets thousands of dollars, it generates millions in net benefits to society by changing public policy.” Watchdog reporting is good for society, but measuring its impact and its cost is not straightforward. The starting price varies by industry as well. Video is more expensive than print or digital: production costs for a single hour-long news documentary for a show like PBS Frontline start around $500,000 and can go upwards of $1 million.

A news app like Bailiwick starts at about $50,000 in up-front software development costs. It currently costs me about $1,000 a month to maintain; I plan to offer it for free online for about a year. Newsrooms are likely hesitant to commit to spending $50K on custom software projects. In the future, newsrooms could consider joining together to fund similar projects.

It is clear that news apps like Bailiwick can help newsrooms to produce more data-driven investigative work and can lower the cost of discovery for these investigative stories. Newsrooms that want to adopt this kind of automation will need to commit to up-front costs that are not insubstantial, and will need to plan for a development time frame that is longer than the customary time frame for daily or weekly news production. However, newsrooms excel at developing and meeting production timelines, so this challenge will likely not be an obstacle. It is also helpful to think about automated investigation as a kind of artisanal production. Data-driven enterprise stories are not easily mass-produced; they are artisanal products. Small-scale automation using human-in-the-loop systems is a cost-effective way to increase production of these high-quality stories, just as a small bakery might buy a large-capacity mixer in order to produce more loaves of bread. As newsrooms increase their production of watchdog reporting in the public interest, society will benefit.

Image via Sakena on Flickr.

Help Support and #FreeThePress in 2017

In December 2016, the Committee to Protect Journalists counted 259 journalists imprisoned worldwide. On January 25th and 26th, we hope you will join our effort to #FreeThePress and support CPJ.

Journalists and others imprisoned for “crimes” like exercising free speech have said that communication from the outside world is a vital form of support. As a Chinese activist once put it:

“When letters and cards arrive at the prison like snowflakes, the prison knows that a lot of people are paying attention to these prisoners.”

With input from the Committee to Protect Journalists, the following group of organizations are holding postcard-writing and social media campaigns to highlight the situations of seven imprisoned journalists. We hope you’ll join us by lending your support on social media.



Ethiopian journalists work in some of Africa’s most restrictive conditions. Of the 16 journalists imprisoned there in December, two have spent a decade behind bars.

Darsema Sori, Radio Bilal

Darsema and a colleague at the faith-based Radio Bilal were arrested in early 2015 and charged with 18 other defendants for allegedly inciting extremist ideology and planning to overthrow the government. Darsema is senior editor at the radio station, which extensively covered the Muslim community’s protests of a number of government actions – including the closing of Ethiopia’s only Muslim college. 


A year ago, there were no journalists imprisoned in the Americas. But this December, CPJ identified journalists jailed for their work in Cuba, Panama and Venezuela.

Braulio Jatar Alonso, Reporte Confidencial

Jatar was arrested in September, a day after he wrote about protestors who greeted President Maduro with jeers and by banging pots and pans. Authorities claimed he was found with thousands of dollars in cash to be used for a “terror attack.” A legal rights lawyer charged that authorities planted the money and arrested Jatar, who is also a lawyer and political activist, for posting videos of the protest on his site.


China has ranked among the world’s worst jailers of journalists for years; 38 were in prison in December, as Beijing deepened its crackdown on coverage of protests and human rights abuses.

Ilham Tohti, Uighurbiz

Tohti is a Uighur scholar, writer and blogger who was arrested in early 2014. Authorities closed his website, which published articles on social issues in Chinese and Uighur. Tohti was sentenced to life in prison for promoting Uighur separatism on his website. He denied the charges, and his sentence was protested by the U.S. State Department, the European Union, and human rights organizations. 


One of the riskiest areas for journalists in India is the Bastar region in the state of Chhattisgarh, epicenter of the conflict between Maoists and security forces. Pressures come from both sides in the conflict and are most severe for those reporting there full-time.

Santosh Yadav, freelance

Yadav, a freelancer in the Bastar region, reports for several local dailies, often on allegations of human rights abuses by police against tribal communities. In 2015 he was arrested and charged with rioting, criminal conspiracy, attempted murder, and supporting and aiding terrorist groups. His colleagues said the charges were fabrications brought in retaliation for Yadav’s human rights reporting.


Following the failed coup attempt of summer 2016, the Turkish government intensified its media crackdown, shutting more than 100 news outlets and arresting dozens of journalists. The 81 imprisoned in Turkey in December accounted for nearly a third of the global total of imprisoned journalists.

Musa Kart, Cumhuriyet

Cartoonist Musa Kart was one of 12 staff and board members of Cumhuriyet, Turkey’s oldest newspaper, detained in October. An official statement said they were suspected of producing propaganda for the Kurdistan Workers’ Party (PKK) and the Fethullah Gülen Terror Organization (FETÖ), two rival groups the government labels terrorist organizations. No date has been set for Kart’s trial.


CPJ’s December census marked the first time in eight years that Iran was not among the five countries with the largest imprisoned journalist populations. But eight journalists remained jailed in December, two of them arrested in 2016.

Issa Saharkhiz, freelance

Saharkhiz has been imprisoned three times in seven years. His most recent arrest was in 2015, which led to his conviction for “insulting the Supreme Leader” and a three-year prison sentence, later reduced to 21 months. Saharkhiz is a prominent journalist who contributed to the opposition website Rooz Online. He still faces charges of “insulting the head of the judiciary.” 



Egypt had the third-largest population of imprisoned journalists – a total of 25 – in CPJ’s 2016 census, after Turkey and China.

Mahmoud Abou Zeid (Shawkan), freelance

Freelance photographer Abou Zeid was detained in 2013 while covering clashes between Egyptian security forces and supporters of ousted President Mohamed Morsi. After more than two years of pretrial detention, he was charged with weapons possession, illegal assembly, murder, and attempted murder. His trial was ongoing in late 2016. In November, he was honored in absentia with CPJ’s International Press Freedom Award.

Twitter and managing journalistic work: Between distraction and optimization


Note: This is the fifth post in the Beyond 140 characters series, which investigates how, why, and under what circumstances political journalists engage with Twitter. This piece shares some of the project’s key findings. The previous post reflected on how the nature of news events shapes political journalists’ Twitter engagement.


Multi-platform and digital storytelling, understanding metrics and audiences, interactivity, and branding—these are just some of the issues facing journalists today, all while they use tools and techniques that weren’t possible ten years ago, and under the added pressure of identifying a unique angle or scoop. If even a few of these tasks are to be completed well in a given work shift, 24 hours in a day may not be enough. This theme—managing many competing tasks and workflows—has emerged in my interviews with 26 political journalists who work for some of the top legacy media organizations in the U.S.

Twitter has become an established journalistic resource, one that reporters are expected to constantly and actively utilize (as I discussed in a previous post on organizational pressures and institutional policies). Early skepticism appears to have morphed into conventional wisdom that, to use one journalist’s own words, “if you’re a journalist and not on Twitter, you’re a dinosaur.” But the story is not that simple. Rather, it is one of constant negotiation and evaluating opportunity costs.


Twitter as a distraction from core tasks

Among the journalists I spoke to, there appear to be two dominant, but opposing perceptions about Twitter’s place in everyday workflows.

The first narrative is one of Twitter as a distraction, where it competes for attention with traditional operations and production goals. Resources are limited, and the more space Twitter takes up in a journalist’s day, the less time and energy remain for the items he or she needs to deliver. One journalist talked extensively about Twitter as a constant stream of overwhelming information and described his experience with the platform as follows:

It’s sort of like telling me that I should be watching 15 TV channels at the same time. I’m sorry, but I’m going to report and write and give you something of value, and I’m going to turn all that crap off that’s not going to matter to me.

Many others provided examples of distraction and information overload taking a toll on their productivity. Individuals with such experiences tended to engage with Twitter selectively and developed a heightened awareness of the need to switch off. These journalists unanimously said they benefited from being more focused on their work. After all, as many highlighted, good writing is a craft, and doing it well takes time and attention. One journalist explained his frustration:

We’re getting hit every week with a new tool or something else that’s cool. I got an email yesterday saying, ‘Hey, come learn Instagram.’ And then last week it was ‘Hey, come learn something else’. Are you kidding me?

On the flip side, minimizing distraction clearly involves opportunity costs. Many journalists are aware of this tradeoff, but have come to terms with it as the lesser of two evils. For example, not logging on at all or turning off push notifications can lead to missing breaking news or content shared by key individuals, especially politicians and their operatives, who are amongst the keenest adopters of Twitter as an immediate and interactive tool for public dialogue, outreach and mobilizing. These journalists risk falling a step behind colleagues and competitors, especially so when news might not be available elsewhere.

While news organizations may be willing to trade occasional Twitter abstinence for a better journalistic end product, things become trickier when journalists’ personal preferences immediately challenge their employer’s overall social media strategy. For example, how does management deal with those journalists who engage infrequently or inconsistently, when Twitter has long become an integral channel of content distribution for that news organization? Or how does one reconcile the incompatibility of the brand a journalist cultivates on Twitter with the core values and cultures of his or her employer? One journalist admitted:

I’ve kind of made my peace with it. It is what it is. If I don’t get a job because I don’t have a Twitter presence then, well, I don’t get a fucking job. That’s fine… But hire me or fire me for the stories, not Twitter or my tweets. I’m sure my bosses would like me to tweet more, but frankly, it’s just not a priority.


Twitter as a tool to optimize workflows

The second narrative is an opposing and more optimistic one, of Twitter as a tool to manage and optimize workflows. The factors that shape such perceptions aren’t dissimilar from those discussed above: journalists also referred to editorial and temporal pressures to produce content, for example, or an institutional drive towards branding or creating multi-platform visibility. But rather than considering Twitter a distraction from any one of such objectives, journalists view the platform as an extension of, or an addition to, the available means of achieving these goals. What sets those individuals apart is how they appropriate Twitter in such a way that it “fits into” the tasks and procedures they are already working on.

The vast majority of journalists I spoke to were generally hesitant to pinpoint concrete, realized benefits of their engagement with Twitter, largely due to the difficulty of assessing both short- and long-term outcomes. Yet, four distinct ways of perceiving Twitter in their day-to-day routines emerged, especially during intense news periods:


  • Twitter as an early warning system, which immediately signals breaking news and alerts journalists to big stories. One journalist told me:

    I like the immediacy of Twitter and I like all the things that it’s designed to do. I can get the news that I want from various sources all at the same time, and I get it quickly. I feel like when something big is happening, I know it instantly – 20 minutes before you’re going to get the breaking news or an email from a news organization. It has become a constant in my life. It’s my favorite.

  • Twitter as a digital notebook, which allows journalists to record (in a note-taking style; after all, a tweet is only 140 characters) and immediately publish short snippets of information. Not only can journalists realize a competitive advantage with this approach by being first to push out content; they can also later return to their Twitter timeline and use its content as the backbone of the story they are tasked to produce. One journalist explained:

    I initially did not want to use Twitter at all. I thought I was just going to stay [at my news organization] as a writer on the web and for print. But my editor at the time said we really need to adapt to social media. Use it as your notebook. And so I have really taken that to heart. A reporter’s notebook traditionally has always been where they keep their notes, their asides, observations, color, different scenes. And they eventually cull from their notebook and put it into a story. Twitter to me is groundbreaking for political reporters especially because you can now share your notebook with the whole world. If I know something is accurate, I’ve seen it with my own eyes, or I’ve heard them say it to me, I put it in my notebook and I have it there. Now I can also put it on Twitter. So it enables me to build a bigger story. Twitter I think is a component of your storytelling as a reporter.

  • Twitter as a modern wire service, which already provides a curated selection of stories. This selection of stories is biased, of course, as it is determined by the users and conversations every journalist chooses to follow.
  • Twitter as a direct line to political elites. Twitter tends to be driven by communities of interest, and the political community on Twitter is particularly strong. Many journalists described just how accessible politicians and their operatives are on the platform, when they are removed from their PR teams and personally share updates (as opposed to exclusively communicating via public statements or press releases). One journalist explained:

    I think for politicians, I’ve seen it especially in the past two or three years, that’s how they like to break news. They are getting a huge amount of political capital from Twitter – however hollow that might be – but they can certainly raise their profile from a single tweet. I think people see that; politicians see that. And it’s almost like a snowball effect. They see someone get some real mileage off a tweet, a series of tweets or an active Twitter presence, and they do the same.

Journalists also can’t be shaken off as easily, since tweeting at a politician is a publicly visible inquiry that demands a response, if only for the sake of accountability. One journalist says he has “definitely used Twitter to reach out to politicians,” but is also careful about the nature of politicians’ tweets, explaining that he “tend[s] to lose interest in those that just strictly use it as a PR tool.”

Despite all this talk of managing productivity and optimizing workflows, it was striking to find that only a handful of reporters use social media management tools, such as TweetDeck or SocialFlow, that help keep tabs on relevant topics and the accounts of key individuals, especially political elites. Many admitted that they “probably should” be using such a tool. Others confessed their fatigue with the constant evolution of technology and the scarcity of resources, leading to an unwillingness to learn yet another (possibly short-lived) tool, however useful it might be.


Struggling to switch off

Journalists are naturally drawn to spaces rich with knowledge about current events, and Twitter’s “constant stream of information” makes the platform ever more appealing. News never stops and neither does Twitter. This makes it hard for heavily engaged journalists to switch off. One reporter described the following:

I mean we all feel this way, and it’s not really about journalism at this point, but all of social media – they’re apps, but they’re worlds. They’re platforms, but they’re actual worlds and communities. I think this creates a level of mental noise that might be unhealthy. There are always these conversations happening everywhere. I think the big challenge for me will be knowing when to unplug. So that when I wake up in the middle of the night I’m not reading Twitter.

Another journalist confessed:

I’ve become a little – I mean I hate to say addicted, but it’s just become much like we are. Our phones are always in our hands, or our pockets. I pick up my phone and I go through Twitter. In the middle of the night when I wake up, I go to Twitter. When I wake up in the morning I check Twitter. I find that it’s kind of become my primary news source.

The above accounts link to wider phenomena: concerns over work-life balance, the relationship between quantity and quality, and questions about what meaningful engagement looks like. These insights are especially salient given scarce resources and fierce competition in journalism, and given that concrete outcomes beyond traditional journalistic practices and routines remain elusive, both for those who invest heavily in the platform and for those who engage selectively.

Image copyright Paul Strickland, used with permission.

How the nature and circumstances of a news event are key in shaping political journalists’ Twitter engagement

E. Vargas, CC BY-SA 2.0

Note: This is the fourth post in the Beyond 140 characters series, which investigates how, why, and under what circumstances political journalists engage with Twitter. This piece shares some of the project’s key findings. The previous post reflected on the role of news organizations, institutional social media policies and economic considerations.

The practice of journalism is highly situational: during key news events journalists follow different rhythms and work patterns than during slow news phases. Findings from the “Beyond 140 characters” project indicate how political journalists’ engagement with Twitter varies between mundane and intense news periods. For example, many journalists apply the value of newsworthiness to Twitter in a similar fashion as they would to legacy media. One journalist described his approach during slow news phases as follows:

I think one Twitter skill for reporters is that – and I’ve had to learn this – when news is not happening, you don’t want to abuse your Twitter following. They’re there to get great information from you, not to know about your life story. They don’t want to know that we’re getting coffee. You don’t want to over-share. You have to have the same respect for your followers on Twitter as newspapers do for their readers. You don’t want to inundate them and you don’t want to give them just information that’s meaningless.

During breaking news events, there is a significant increase in the scope of how journalists take to Twitter. This is twofold: most journalists spend decidedly more time on Twitter, and their practices on the platform become more diverse. For example, one journalist compared an average day to an intense news day on Twitter, juxtaposing how much space each takes up in his work day, as well as the different purposes the platform serves depending on how much is going on in his beat:

During a regular day, I sometimes need a little mental break for a couple of minutes. I’ll scroll through Twitter as I kind of think about what I just talked about in an interview, and what I might want to do with that. So I would say… during an average day maybe my total time on Twitter would be half an hour.  But if I’m live tweeting something, an event or meetings I cover, which often go on for seven, eight, ten hours… I wouldn’t be on Twitter without interruption, but visiting multiple times in an hour for hours on end.

Many journalists are readily able to lay out how Twitter is used for a variety of reasons during busy news periods. One described a commonly shared approach and objective:

We use it to break news, we use it to live tweet events, we use it to engage audiences in conversation. We use it to make sure that as many people as possible see the content and the storytelling that we’re doing on whatever event it is.

The nature and circumstances of an event matter

But a breaking news event can encompass a range of different events and stories, and not all news events allow for or facilitate equal journalistic responses and coverage. Previous research has documented Twitter’s utility and status in the context of breaking news. Many journalists agree, to use one individual’s own words, that:

You’re sort of curating your own kind of news feed in a way. And so the downside of it is that you can get lost in that stream of information. And so I think that the challenge really as a journalist is using the platform to gather information that you otherwise would never have gathered.

The vast majority of journalists in my study confirm that Twitter has become the go-to medium in breaking news scenarios, whether used actively or passively, for first-hand information as events unfold, real-time commentary, eyewitness accounts, and user-generated footage. Many journalists shared ample examples of how Twitter emerged as a vital tool in their past coverage of breaking news. One summarized its role as follows, outlining how it overtakes traditional news media during intense and information-sensitive news periods:

If there’s a breaking news event going on I definitely follow Twitter more closely. I follow it to the point of distraction almost, but that’s where you can get the news much quicker, even more quickly than broadcast news.

Opportunities to experiment

But there are other intense news periods that aren’t necessarily breaking stories, despite the fact that they might still be major stories on a national (or even international) scale. Predictable or scheduled news events (think, for example, elections or rulings in high profile court cases) are anticipated and non-spontaneous. The nature and circumstances of such news events are key for journalists on Twitter, as the story is embedded into an existing information ecosystem and offers established channels to pursue it: most, if not all, actors involved are known, thus providing access to legitimate and reliable sources (one of the key challenges in breaking news scenarios, in fact); official statements and press releases are widely accessible; and coverage of the event might have been going on for weeks or months already, providing a narrative backdrop and points of reference that any further coverage can be linked to.

Many journalists continue to approach these (non-breaking) intense news periods with a “business as usual” perspective and appropriate many of Twitter’s socio-technological affordances to fit long-standing journalistic practices (a phenomenon that has been discussed as ‘normalization’ in previous studies). One journalist told me:

If something breaks, we are open to covering it on Twitter. But I’m also kind of in the realm where unless it’s something really big, you’re better served, and your readers are better served, by tweeting out a link to the final story. It’s like instead of just having ten tweets that have one different fact in them and no links to your website, take all those ten facts that you were going to tweet out, put them in a story, and then tweet that link. It seems like that benefits your newspaper better.

A select few take a more creative stance. These individuals are keen to explore the potential that lies beyond simply mapping traditional journalistic routines onto Twitter. A small but notable number of journalists recognize that Twitter offers myriad opportunities to experiment with new journalistic formats. One told me about the wider institutional social media strategy that drives both the news organization’s and individual journalists’ Twitter practices during intense news periods:

If you take the Democratic Debates that we had in Vegas [in 2015]… I’m really proud of what we’re doing in terms of social first editorial storytelling. We’re trying to do that more and more for political events… We turned a two-hour TV event into a two-day, trending affair across Twitter and Facebook. We used it to try different ways of storytelling, such as Twitter Moment collections, an Instagram movie series, and backstage from the debate we went live on Facebook.

The rationale behind this is simple:

The days of taking what you’ve done on another platform and putting it on social are over.

Gaining or losing a competitive advantage?

Competition is fierce in the news industry, particularly during intense news periods when many national political journalists are likely to be covering the same story. It can be a key moment for both an organization and a journalist to position themselves in the market and use the platform as added value, grabbing the audience’s attention and establishing competitive superiority, as one journalist explained:

To me, those same qualities that make a great reporter in the newspaper or on television apply to Twitter.  But what makes Twitter different is that you can break things on Twitter and really get into the political ecosystem in a way. I mean if I break something on Twitter and it’s a good scoop… in terms of it moves a story forward, brings fresh information. I’d rather break it on Twitter than anywhere else because it bleeds out very quickly. And the news just becomes like wildfire on Twitter. So if you could do that, you could develop a reputation eventually as a reporter, as someone who is really at the front of information. And that’s what you want.

But there are two sides to this reality, and many journalists are grappling with both. On the one hand, there are those who deliberately and proactively share scoops to satisfy the ‘thirst to be first,’ such as the one quoted above. On the other hand, there are those on the receiving end of such tweets, who benefit from the exclusives another journalist may be handing to them, as one journalist explained:

I think [name of a journalist who works for the competition] on social media does give away too much. I mean, he is very plugged in with certain politicians and we know [about those individuals]. And I can watch his Twitter feed and know about it before he gets anything onto his news organization’s website. He’ll have tipped me off. I don’t want to be like that. He’s great, you know, but he’s helping me.

There is a delicate relationship between the promise of being the first to pick up on and break a news story on Twitter versus the risk of giving away a scoop. After all, to use another journalist’s words, “once it’s on Twitter it’s not new anymore,” and using a competitive advantage in a tweet might sometimes mean losing it for the bigger story.

Having a breakthrough moment

In my interviews, I prompted all journalists to tell me about a particularly significant experience they had on Twitter, given its unprecedented speed and immediacy, and how things can go viral. One journalist, more conservative and less creative than others in his take on Twitter, pondered its short-lived memory and wondered about the tangible impact his efforts on the platform produce:

Every day is now like 20 different news cycles. One little development will happen and it will just like ripple through Twitter and then, by the end of the day, it can be totally forgotten.

In the end, the vast majority of memorable experiences that journalists shared—irrespective of whether these were positive or negative—were immediately linked to a particular news event. My data suggests that it is during periods of high political activity or breaking news coverage that the hoped-for and perceived return on journalistic investment in Twitter becomes most visible. One journalist reflected on a key milestone in his career:

You have to have a breakthrough moment on Twitter. Mine was the government shutdown in 2013. I went from probably 10,000 to about 50,000 in two weeks… [I] got a huge following because people wanted to follow it minute by minute. And I was giving them what they wanted with fresh information. And that Twitter explosion and all the scoops I had in 2013 led directly to me getting [my current] job.

Ultimately, Twitter has become a critical tool beyond merely being “another distribution channel”; it allows journalists to place their finger on the pulse of current events.

Exploring the tension between transparency and user experience

Ranking algorithms decide how content is presented, sorting everything from search results and dating matches to job applicants and news feeds. Algorithm-curated feeds, as used by Facebook and online news media, are receiving critical attention in the wake of the 2016 presidential election because they can promote false or misleading material. It is becoming evident that transparency about how such feeds, and other rankings, are curated is paramount to gaining and maintaining trust in the information presented.

But achieving this is not straightforward. While transparency can increase trust in an algorithm’s output, one study demonstrated that too much transparency can negate this effect. Students receiving peer-reviewed coursework scores were assigned to three groups with varying levels of transparency around the peer review process: the low transparency group received no additional information, the medium group received a brief explanation, and the high transparency group also saw raw scores and the algorithmic adjustments. If the student’s grade was lower than expected, medium transparency improved trust in the grading system, but high transparency did not. Transparency had no effect on trust when scores matched the student’s expectations. Beyond the level of transparency, there is also a concern, as yet untested, that adding a surfeit of transparency features or information to a system will interfere with usability.

To investigate whether this tension exists, and if it does, to quantify its effect, we created a Web tool that visualizes ranks of programming languages—a topic of keen interest to our collaborating publisher, the Institute of Electrical and Electronics Engineers (IEEE) Spectrum, which publishes content for its professional audience of technically savvy engineers. The tool presents a dynamic ranking according to 12 different weighted measures of each language’s use or popularity (e.g. how many search results or job postings there are for the language). The tool was published by IEEE and attracted users with domain knowledge in programming. Users who interacted with the tool were then invited to complete our opt-in survey.
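The weighted-ranking mechanics described above can be sketched in a few lines: normalize each measure so that no single unit dominates, take a weighted sum per language, and sort. The measure names, values, and weights below are invented for illustration and are not IEEE Spectrum's actual data or methodology.

```python
# Sketch of a weighted multi-measure ranking (hypothetical data, not
# IEEE Spectrum's actual measures or weights).

def rank_languages(measures, weights):
    """Score each language by a weighted sum of min-max-normalized measures."""
    names = list(weights)
    norm = {}
    for m in names:
        vals = [measures[lang][m] for lang in measures]
        lo, hi = min(vals), max(vals)
        span = (hi - lo) or 1.0  # avoid division by zero on constant measures
        norm[m] = {lang: (measures[lang][m] - lo) / span for lang in measures}

    scores = {
        lang: sum(weights[m] * norm[m][lang] for m in names)
        for lang in measures
    }
    return sorted(scores, key=scores.get, reverse=True)  # highest score first

# Hypothetical inputs: two of the 12 measures described in the article.
measures = {
    "Python": {"search_hits": 900, "job_postings": 600},
    "C":      {"search_hits": 700, "job_postings": 900},
    "COBOL":  {"search_hits": 100, "job_postings": 50},
}
print(rank_languages(measures, {"search_hits": 0.5, "job_postings": 0.5}))
# ['C', 'Python', 'COBOL']
print(rank_languages(measures, {"search_hits": 1.0, "job_postings": 0.0}))
# ['Python', 'C', 'COBOL']
```

A user "editing the ranking" in the tool effectively supplies a different weight vector, which, as the two calls show, can reorder the list.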


Screenshot of the interactive web tool showing the popularity of programming languages.


Four transparency features were incorporated directly into the interface, allowing the user to interact with how the ranking is produced: (1) selecting different pre-set weighting combinations, (2) directly editing the weighting or the inclusion/exclusion of various data sources to create a custom ranking (e.g. see figure below), (3) visually comparing two rankings with potentially different weightings, and (4) filtering the types of languages shown. These features were designed both to provide transparency into how the ranking was synthesized and to give users an entry point for interactively expressing disagreement with the defaults.



Screenshot of the interactive web tool after clicking “Edit Ranking”


We analyzed survey results from 204 individuals (148 after data cleaning) who voluntarily visited the web tool and completed the survey. Data cleaning included standardizing birth dates and removing rows with invalid inputs (such as “Brazil” for year of birth), more than one unanswered question, or inconsistent answers (such as rating both “Interesting” and “Boring” at 4 or 5). The goal was to mitigate the impact of sloppy responses that might reflect a user not paying close attention to the survey.
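As a rough illustration, the cleaning rules described above might look like the following. The field names, the plausible-year bounds, and the treatment of missing answers are assumptions for the sketch, not the study's actual code.

```python
# Sketch of the survey-cleaning rules described above. Field names and the
# 1900-2005 year bounds are hypothetical assumptions for illustration.

def is_valid(row):
    # Year of birth must be a plausible number, not text like "Brazil".
    try:
        year = int(row["year_of_birth"])
    except (ValueError, TypeError):
        return False
    if not 1900 <= year <= 2005:
        return False
    # Drop rows with more than one unanswered question.
    if sum(1 for v in row["answers"].values() if v is None) > 1:
        return False
    # Drop contradictory ratings: both "Interesting" and "Boring" at 4 or 5.
    interesting = row["answers"].get("Interesting") or 0
    boring = row["answers"].get("Boring") or 0
    if interesting >= 4 and boring >= 4:
        return False
    return True

rows = [
    {"year_of_birth": "1985", "answers": {"Interesting": 4, "Boring": 1}},
    {"year_of_birth": "Brazil", "answers": {"Interesting": 3, "Boring": 2}},
    {"year_of_birth": "1990", "answers": {"Interesting": 5, "Boring": 5}},
    {"year_of_birth": "1978", "answers": {"Interesting": None, "Boring": None}},
]
cleaned = [r for r in rows if is_valid(r)]
print(len(cleaned))  # 1: only the first row survives all three checks
```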


Table 1. Surveyed transparency features (independent variables; response options: “Used,” “Didn’t use,” “Didn’t know it was available”)

  • Select weighting pre-set for ranking
  • Edit ranking weights or data sources
  • Compare two rankings
  • Filter visible language types

Table 2. Tool survey ratings, on a Likert scale from 1 (not at all) to 5 (a lot), for items related to Trust or User Experience (UX).

  • Trust items: Accurate, Authoritative, Informative, Objective, Relevant, Trustworthy, “Unbiased” (scores flipped from Biased)
  • UX items: Attractive, Clear, Coherent, Easy to use, Enjoyable, Interesting, Pleasant to use, Satisfying, “Unboring” (scores flipped from Boring)


For each transparency feature, we asked, “Did you interact with X?” to which they could answer “Used,” “Did not use,” or “Did not know it was available” (Table 1). Participants also provided ratings for trust- and user experience (UX)-related words on a Likert scale from 1 (e.g. not at all trustworthy, or not at all easy to use) to 5 (very trustworthy, or very easy to use) for a list of seven trust-related and nine UX-related terms when they were finished with the web tool (Table 2). With these responses, we could then test how transparency affected ratings of trust and UX. For example, for participants who used all transparency features, was trust increased? Was UX decreased?

We also collected information about the participants, including years of programming experience. We found that years of experience correlated negatively with ratings for the trust-related words “Trustworthy” and “Objective,” and the UX-related word “Satisfying.” In other words, people with more domain-specific experience were more skeptical or uncertain about the information presented, and also had a poorer experience (in terms of satisfaction). This may prove important considering the goal of transparency is to increase trust: it may be that people with more expertise would prefer additional levels of transparency, but additional research would be required to test this idea. The effect on trust appears to be quite robust: when the trust- and UX-related term ratings are averaged together (see below for explanation), the correlation with years of experience remains for trust terms. The correlation disappears for averaged UX-term ratings, though, so it is possible that the negative relationship between years of experience and ratings for “Satisfying” is spurious.

Many of the trust- and UX-related terms may be considered similar, and averaging those ratings therefore simplifies our statistics. Factor analysis and Cronbach’s alpha calculations showed that ratings of the trust terms, and of the UX terms, exhibited high internal consistency, meaning the terms within each set behaved very similarly. Therefore, rather than analyzing each term separately, the responses for the trust terms and for the UX terms were each averaged, giving us one trust score and one UX score per participant.
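A minimal sketch of that consistency check and averaging step, using the standard Cronbach's alpha formula, alpha = (k/(k-1)) * (1 - sum of item variances / variance of participant totals). The ratings below are invented, and the study's actual computation may differ.

```python
# Sketch: check internal consistency with Cronbach's alpha, then collapse
# each participant's item ratings into a single mean score.
from statistics import pvariance, mean

def cronbach_alpha(item_scores):
    """item_scores: one list of per-participant ratings per survey item."""
    k = len(item_scores)
    # Each participant's total across all items (population variance used
    # consistently for both terms of the formula).
    totals = [sum(ratings) for ratings in zip(*item_scores)]
    item_var = sum(pvariance(ratings) for ratings in item_scores)
    return (k / (k - 1)) * (1 - item_var / pvariance(totals))

# Three invented trust items rated by four participants (rows = items).
trust_items = [
    [5, 4, 2, 3],   # "Trustworthy"
    [5, 4, 1, 3],   # "Accurate"
    [4, 4, 2, 2],   # "Objective"
]
alpha = cronbach_alpha(trust_items)
print(round(alpha, 2))  # 0.95: high internal consistency

# High alpha justifies one averaged trust score per participant.
trust_scores = [mean(ratings) for ratings in zip(*trust_items)]
print([round(s, 2) for s in trust_scores])
```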


Using transparency feature “Compare Two Rankings” significantly (p<0.05) increases ratings for trust and UX. Knowing the “Edit Ranking Weights” transparency feature exists significantly increases UX ratings.


We then tested for differences in trust and UX ratings between the three levels of interaction (“Used,” “Didn’t use,” “Didn’t know it was available”) with each of the four transparency features. We found that ratings for trust and UX do not depend on the level of interaction with any of the four transparency features.

We then considered that the level of interaction could be framed in one of two ways: 1) whether a transparency feature is known about (“Used” and “Didn’t use”) vs. not known about (“Didn’t know it was available”), or 2) whether a transparency feature is used (“Used”) vs. not used (“Didn’t use” and “Didn’t know it was available”). Indeed, we found that when we consider only whether a transparency feature was used or not, the “Compare Two Rankings” feature significantly increases both trust in the information and UX. In addition, when we consider only whether a feature was known about, knowing the “Edit Ranking Weights or Data Sources” feature exists significantly enhances UX.
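The two binary framings amount to regrouping the same three response levels in different ways before comparing ratings. The sketch below illustrates this regrouping on made-up responses; the study's test statistic is not specified here, so a Mann-Whitney U test is used as one reasonable choice for ordinal rating data.

```python
# Hypothetical sketch: collapse three interaction levels into two binary
# framings and compare averaged UX ratings between groups.
from scipy.stats import mannwhitneyu

# Illustrative (made-up) data: (interaction level, averaged UX rating)
responses = [
    ("Used", 4.5), ("Used", 4.0), ("Used", 4.8),
    ("Didn't use", 3.0), ("Didn't use", 3.5),
    ("Didn't know it was available", 2.5),
    ("Didn't know it was available", 3.0),
]

# Framing 1: used vs. not used
used = [r for g, r in responses if g == "Used"]
not_used = [r for g, r in responses if g != "Used"]
_, p_used = mannwhitneyu(used, not_used, alternative="two-sided")

# Framing 2: known about vs. not known about
known = [r for g, r in responses if g != "Didn't know it was available"]
unknown = [r for g, r in responses if g == "Didn't know it was available"]
_, p_known = mannwhitneyu(known, unknown, alternative="two-sided")
```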

This preliminary analysis suggests that providing transparency features can increase trust in information as well as enhance UX, and that this approach may be effective regardless of whether the user interacts with the transparency feature. Our findings also suggest that providing these affordances does not necessarily harm UX. Future work should include more participants and use a tool that does not limit participants to a specific domain. Future experiments might also be more structured, explicitly requiring participants to interact (or not) with specific parts of the tool, to avoid self-selection among survey respondents.

Study: Automated journalism increases trading in financial markets


In 2014, the Associated Press began using algorithms to write earnings reports covering publicly traded companies. These articles synthesize information from firms’ press releases, analyst reports and stock performance, and are widely disseminated by major news outlets within hours of publication.

“Through automation, we’re providing customers with 12 times the corporate earnings stories as before, including for a lot of very small companies that never received much attention,” said Lisa Gibbs, AP’s global business editor.



Figure 1: AP Earnings Announcement Media Articles over Time. The figure above plots fraction of firms’ earnings announcements receiving an AP reporter-written and automated article, by quarter. The sample includes 4,292 firms and 57,467 earnings announcements.

This year, researchers from Stanford University and the University of Washington evaluated the role of automated journalism in capital markets. The analysis, conducted by professors Elizabeth Blankespoor and Ed deHaan along with PhD student Christina Zhu, found compelling evidence that AP’s automated articles increase firms’ trading volume and liquidity.

“After the articles are published, we see an increase in trading volume that persists three to four days after the story comes out,” explained deHaan, an accounting professor at the University of Washington.


Figure 2: Abnormal Volume by Days Relative to Earnings Announcement. The figure above plots the final sample firms’ abnormal trading volume for five days before and after the earnings announcement, separated by companies that began receiving coverage (“treatment” firms) and those that hadn’t (“non-treatment” firms). The sample includes 2,268 firms and 29,821 earnings announcements.


What is the role of the media industry when it comes to investing?

The media contribute to more informed and efficient financial markets by conducting analysis, uncovering corruption and holding executives accountable. Beyond that, news organizations relay facts from public accounting reports to the public through a vast distribution network.

This study found a positive relationship between the public dissemination of objective information and market efficiency, a major discovery for the implications of automated journalism for capital markets.

“It’s an exciting first step in what is possible with automation technology,” Blankespoor said. “It’s not about displacing journalists from their jobs — it’s about providing coverage for firms that were not previously in the news.”

What was the study’s methodology?

The researchers focused on firms that did not have AP articles written about their earnings announcements from 2012 to 2014, before automated coverage started. Within that group, they separated and compared companies that began receiving reports and those that hadn’t.

Blankespoor said that when the researchers controlled for other factors, they found the change in abnormal trading volume and depth was more positive for firms that began receiving coverage than those that hadn’t, “suggesting that automated coverage increases firms’ trading and liquidity around their earnings announcements.”
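The comparison described above has the shape of a difference-in-differences design: the change in abnormal trading volume for firms that began receiving coverage is measured against the change for firms that did not. The sketch below shows only the basic arithmetic of that design; all numbers are illustrative, not the study's results.

```python
# Hypothetical difference-in-differences sketch of the study's comparison.
# Mean abnormal volume around earnings announcements, before vs. after
# automated coverage began (all values made up for illustration).
treatment_before, treatment_after = 0.10, 0.25  # firms that got coverage
control_before, control_after = 0.12, 0.15      # firms that did not

# Subtracting the control group's change nets out market-wide trends,
# isolating the change attributable to the new automated coverage.
did = (treatment_after - treatment_before) - (control_after - control_before)
print(f"difference-in-differences estimate: {did:.2f}")
```

A positive estimate, as in the hypothetical numbers here, would correspond to the study's finding that the change in trading volume was more positive for covered firms.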

Learn more about automated journalism in this guide published by the Tow Center.

Francesco Marconi is the manager of strategy and corporate development at The Associated Press and an Innovation Fellow at the Tow Center for Digital Journalism at Columbia University.