Past Events

In Defense of Leaks: Jill Abramson at the Tow Center


Jill Abramson spoke at the Columbia Journalism School last week on the topic “In Defense of Leaks”  as the final lecture of the Tow Center’s Journalism After Snowden Series.  The Series began with an inaugural panel last January with Jill Abramson, Janine Gibson, David Schulz and Cass Sunstein, moderated by Tow Center Director Emily Bell.

Abramson, the former executive editor of the New York Times, began her talk by contextualizing the Obama administration’s record on press freedom.  She quoted Gabriel Schoenfeld, a conservative critic who recently said, “Ironically, Obama has presided over the most draconian crackdown on leaks in our history.”  The current administration has exerted far more control than that of George W. Bush: President Obama has pursued eight criminal leak cases, more than double the number brought by all previous administrations combined.  One close aide in the Obama White House told Abramson, “Obama hates all leaks. He likes things to be tidy. And he won’t tolerate leaks of classified information.”

Abramson was managing editor of the New York Times during the release of the WikiLeaks cables and executive editor when the Snowden documents were published, giving her a unique vantage point.  She described her position as a “front-row seat to politics and journalistic decisions that were involved in many of these cases.” She offered both a philosophical take on the importance of leaks and the role of journalists as independent watchdogs of democracy, and a practical account of making difficult calls on national security stories.

During the Bush administration, Abramson was involved in half a dozen cases where the White House asked the Times not to publish a story. In all but one case, the Times published the story but withheld sensitive details. Abramson underscored that these decisions are always incredibly difficult, but was unequivocal about her stance. “I’ve come to believe that unless lives are explicitly in danger, such as during wartime, when you might be disclosing things that could endanger troops or involving putting people who are under cover in danger, almost all of these stories should be brought out in public, except in certain circumstances.”

A salient moment of the lecture was Abramson’s re-telling of the decision to publish a particular national security story.  In 2011, the director of national intelligence asked Abramson to hold a story that was to be published the next day about a telephone intercept of a well-known terrorist leader.  She was told that many U.S. embassies abroad had been emptied out, and that the government believed an attack might be carried out if the story was published.  The director of national intelligence at the time told Abramson, “If you publish this story you will have blood on your hands.”

“I had heard these same words from the Bush administration. Significantly aiding Al-Qaeda. When the president or the director of National Intelligence says that, any editor takes this very seriously.”  The NYTimes altered the story, removing the names of those whose conversations were being intercepted.

“Our careful deliberations were beside the point—the next morning McClatchy published the story, names and all.  McClatchy took a different posture from the Times. McClatchy didn’t feel it was important to call the U.S. government for input on these cases.”  While Abramson insisted that for some stories it is important to work closely with the White House and the intelligence community, she also conceded: “In retrospect, I actually think McClatchy made the right call.”

Implicit in Abramson’s re-telling of these stories was skepticism about the government’s behavior during both the Bush administration and the Obama administration, and the feeling that national security had too often been used to silence journalists unnecessarily.

 

She defended the patriotism of journalists (“We are actually patriots, too”), repeatedly expressed regret about withholding certain stories, and criticized the immense growth of the surveillance state: “I think we [journalists] have been too meek.”

Much of the lecture seemed like an act of journalism in itself: the re-telling of stories about the NYTimes newsroom in a post-9/11 climate. The stories were personal – about the decisions and actions of individual reporters and editors – to publish or not, store or destroy drives, collaborate with or keep a distance from government institutions. These stories are valuable for young students of journalism, who can expect to be exposed to an increasing number of national security stories in years to come and may face similar questions throughout their careers. For example, Abramson described receiving a call from Alan Rusbridger, the editor-in-chief of the Guardian, asking the Times to keep a copy of the Snowden cache. This was a unique moment when somewhat competing publications collaborated across the pond to store and protect information they believed to be in the public interest.

And yet, Abramson expressed disappointment about the way the NSA story has been reported since the initial newsbreak. She took particular note of the media’s seeming lack of interest in the NSA’s collaboration with the Israeli government.

Abramson also commented on the current case of James Risen, the New York Times reporter.  “He has been a thorn in the side of the Government for years and I am proud of him for that.”  The Department of Justice will decide this Tuesday whether it will subpoena Risen.

You can watch the entirety of Jill Abramson’s Lecture here.

The Journalism After Snowden series, supported by The Tow Foundation and The John S. and James L. Knight Foundation, has included lectures by David Sanger, Steve Coll, Ethan Zuckerman, James Bamford, and Jill Abramson, in collaboration with the Yale Information Society Project.

The Tow Center will continue this conversation with a panel on National Security Reporting in the Age of Surveillance with Dean Baquet, the Executive Editor of the New York Times, Marty Baron, the Executive Editor of the Washington Post, and Susan Glasser of Politico.  This closing event will be held at the Newseum in Washington, D.C. on February 5th, 2015.  RSVP here.

The Journalism After Snowden project has also included Digital Security Workshops for graduate students of journalism.

 

Past Events

The Future of BuzzFeed: Notes on a Tow Tea


By Andrea Larson

BuzzFeed got its start as a tech company. From the beginning, it has actively created and revamped algorithms to predict the infectiousness of content. It wasn’t surprising to hear Ky Harlin, Director of Data Science at BuzzFeed, compare the virality measures used at BuzzFeed to the formula for the basic reproduction rate (R0) of infectious diseases.  The formulas are close to identical.
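
To make the analogy concrete, here is a minimal sketch of an R0-style virality measure. The function, parameter names, and sample numbers are illustrative assumptions on my part, not BuzzFeed’s actual (proprietary) metric.

```python
# A hypothetical sketch of the epidemiological analogy Harlin drew.
# Names and sample numbers are illustrative, not BuzzFeed's formula.

def reproduction_number(share_probability: float,
                        contacts_per_share: float,
                        view_probability: float) -> float:
    """R0-style measure: average number of new viewers each viewer generates.

    share_probability  -- chance a viewer shares the post ("transmission rate")
    contacts_per_share -- people reached by an average share ("contact rate")
    view_probability   -- chance a reached person actually clicks through
    """
    return share_probability * contacts_per_share * view_probability


if __name__ == "__main__":
    r0 = reproduction_number(share_probability=0.02,
                             contacts_per_share=150,
                             view_probability=0.4)
    # As with an epidemic, the threshold is 1: above it, each viewer recruits
    # more than one new viewer and the post keeps spreading; below it, reach decays.
    print(f"R0 = {r0:.2f} -> {'viral growth' if r0 > 1 else 'decay'}")
```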

Harlin and Samir Mezrahi, head of Social Media and New Platforms, both have backgrounds rooted in math. Harlin was an engineering student at Columbia and worked for a biotech start-up prior to landing the gig at BuzzFeed. Mezrahi worked for an accounting firm in Oklahoma until he decided that number crunching wasn’t doing it for him anymore. They both spoke last Thursday about data’s role before and after the publication of content. They are constantly trying to determine which pieces of content do well and why, as well as which algorithms make for poor predictors. If BuzzFeed is to maintain its ability to predict which posts will go viral, its dedication to algorithms is a necessity.

Three years ago, Ben Smith of Politico was hired as editor-in-chief. According to BuzzFeed founder Jonah Peretti, Smith’s arrival completely changed how content is posted and what it means to post content on BuzzFeed’s site. Smith and Peretti’s recent public response regarding BuzzFeed’s obliteration of over 4,000 of its own posts indicates that the content giant’s interests have turned toward more traditional journalistic practices.

With all of the in-house renovations, I wondered whether the data science department had been bypassed. Would the focus of their team shift from measuring the contagiousness of posts to creating newsworthy content? I had the opportunity to speak with Ky and Samir after the presentation. They told me BuzzFeed’s editorial operation was recently divided into three sections: News, Life, and BuzzTeam. BuzzFeed hopes that the new split will allow for news diversification and a greater number of in-depth stories. Each sector has a different set of responsibilities and team members with differing strengths and backgrounds. Since the split, the relationship between the Data Science team and the editorial teams at BuzzFeed has evolved.  Harlin said that his team of ten can be likened to a miniature consulting firm. They are continually crafting new solutions to in-house problems and inventing ways to make posts that content consumers will want to share.

Harlin’s description of how content spreads by platform, the use of machine learning to predict social hits, and his contention that interpreting algorithms is of greater importance than making them left me wondering what BuzzFeed’s future will look like.  I think that BuzzFeed has the ability to create well-crafted, insightful content time and time again.

Andrea Larson is a data journalist currently studying in Columbia’s Lede program.

Announcements, Past Events

Upcoming Events: Journalism After Snowden


Journalism After Snowden Closing Event

On February 5th, 2015, the Tow Center will be hosting an event on National Security Reporting in the Age of Surveillance: A Conversation About Reporting Post-Snowden with Dean Baquet of the New York Times, Marty Baron of the Washington Post, Susan Glasser of Politico, and Steve Coll, of Columbia Journalism School.  The event will be at the Knight Conference Center at the Newseum in Washington, D.C. and will include the launch of a new Pew study.

More information about RSVPs to come.


Past Events

Buzzfeed, Data, and the Future of Journalism: Reflections on a Recent Tow Tea


By Ilia Blinderman

As a young journalist, it’s often difficult to maintain a positive outlook when considering the future of the industry. And yet, if one were to pick an outlet likely to weather the storm of ambiguity facing today’s media, Buzzfeed would make for an unusually safe bet. Certainly, whatever your opinion of the former listicle clearinghouse’s journalistic forays (many of which have been unqualified successes, despite the company’s difficulty shedding its digital variety-act image and communicating a new, more rigorous persona), Buzzfeed is doing very, very well. As I learned at an early December Tow Tea, the staggering number of hits the site generates is in no small part due to the efforts of its data science team.

In fact, Ky Harlin, Buzzfeed’s Director of Data Science, alongside Samir Mezrahi, a longtime Buzzfeed reporter, noted that data science has played an integral role in the company for many years. The results of Harlin’s A/B tests, where some users see one variant of a post and others see another, are unequivocal. Buzzfeed editors have learned to tweak a post’s headlines, photo types, and list item order to maximize the virality of the content, which, in turn, varies by social network. These rules, according to Harlin, do not indicate editorial direction, but rather furnish journalists with a set of new media best practices that dovetail with editorial judgment.
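
For readers unfamiliar with the mechanics, the sketch below shows one common way such a headline A/B test can be evaluated: a two-proportion z-test on click-through rates. The traffic numbers, function names, and significance threshold are my own illustrative assumptions, not a description of Buzzfeed’s internal tooling.

```python
# A minimal A/B-test evaluation sketch: compare two headline variants'
# click-through rates with a two-proportion z-test. Hypothetical numbers.
import math

def two_proportion_z(clicks_a: int, views_a: int,
                     clicks_b: int, views_b: int) -> float:
    """z-statistic comparing the click-through rates of two headline variants."""
    p_a, p_b = clicks_a / views_a, clicks_b / views_b
    p_pool = (clicks_a + clicks_b) / (views_a + views_b)
    std_err = math.sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
    return (p_b - p_a) / std_err

# Half of the test traffic sees headline A, the other half headline B.
z = two_proportion_z(clicks_a=480, views_a=12_000, clicks_b=560, views_b=12_000)
# |z| > 1.96 roughly corresponds to p < 0.05: treat the difference as real
# and promote the winning headline to the remaining traffic.
print(f"z = {z:.2f}; significant: {abs(z) > 1.96}")
```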

Harlin and Mezrahi were thoughtful, unassumingly confident, and analytically savvy, none of which surprised the members of the media; most were aware of Buzzfeed’s meteoric rise and remarkable performance. It merits mention, however, that at the precise hour that the pair spoke to a packed audience of journalists, students, and researchers, the public was learning that one of America’s most revered magazines was undergoing a violent upheaval. The New Republic, which had found an uncanny balance between the cerebral and the comprehensible, was being transplanted from Washington, D.C., to New York; Franklin Foer, its editor, and Leon Wieseltier, who had helmed its cultural coverage for over three decades, were out. Writer and critic Walter Kirn was, perhaps, most eloquent in his reaction: “Franklin Foer is leaving [The New Republic] along with Leon Wieseltier, which is like the soul leaving the body.” The future of high-quality long-form journalism — which Chris Hughes, The New Republic’s owner, told readers he sought to ensure — was once more in perilous straits.

The question to ask at this juncture is not whether data science has the answer for Buzzfeed; for the site’s earlier incarnation, the answer is a resounding yes. More interesting, and orders of magnitude more challenging, will be its role in shoring up Buzzfeed’s longer, more thought-provoking content. I hope for the best for The New Republic and its staff, both former and current, but can only speculate, albeit optimistically, about its new direction. Buzzfeed, meanwhile, is making laudable strides towards becoming a respected source of long-form reporting. Whether data science can catapult it to the forefront of outlets producing rigorous long-form content remains to be seen.

Ilia Blinderman is a data journalist and writes about culture and science.  He is studying in Columbia’s Lede program.  Follow him on Twitter at @iliablinderman.

 

Past Events

Ta-Nehisi Coates at the Tow Center


By Alexandria Neason

Two weeks ago, students, alumni, faculty, and others packed the lecture hall at Columbia Journalism School to see The Atlantic’s Ta-Nehisi Coates, perhaps America’s foremost writer on race, speak about the media’s often contentious relationship with it.  Lately, the news has been more saturated with conversations about race than I can ever remember. Coverage of the policing and killing of black boys and men, of the burden of raising black children, of reparations and so-called “post-racial” multiculturalism has brought to the mainstream what people of color have long known.  America (still) has a race problem.

Photo by Rhon Flatts

When I think about the history of American journalistic writing on race, it is difficult to separate the writing from the activism that it often highlights.  It’s hard to imagine white, Northern journalists traveling to the unapologetically violent 1960s-era South without some fledgling belief that what was happening was wrong. It is hard to imagine that a journalist could cover the killings of Sean Bell, of Trayvon Martin and Oscar Grant and Jordan Davis and Mike Brown and all the others – without an understanding that something, somewhere, is deeply wrong.

I’d come to hear Coates speak because I wanted to know how he did it.  I wanted to know how he confronted the on-going legacy of American racism every day on his blog, subject to anonymous “keyboard commanders,” as he referred to them.  I wanted to know how he dealt with what I can only assume were vile emails in response to his June cover story on the case for African-American reparations. I wanted to know how he wrote about racist housing policies and the constant loss of young, black life, without becoming disempowered by lack of change.  I’d come to hear how he kept going even when the idealisms of journalism – to effect change – proved elusive.

Coates talked at length about how he got his start in journalism. He spoke about dropping out of Howard University, and about his lack of interest in romanticizing that.  He spoke about the importance of learning to report and to write.  He spoke of the difference between the two.  He spoke about waking up with aching questions and going to bed still bothered by them.  He spoke about reading, about writing constantly (even if it’s bad) as a means of practicing.  He talked about the absolute need to practice.  He told us that writing needs to be a top priority, below only family and health, if we hope to make a career out of it. He told us that if we didn’t love it, to leave.  “It’s just too hard,” he said.

And then he contradicted much of what I’d been taught about journalism. He told us not to expect to change anything with our writing.

I was startled. Wasn’t that the point? To educate, to inform, and ultimately, to change?

No. Coates doesn’t write to change the world. He doesn’t write to change the minds of white people (and he warned both white writers and writers of color of the dangers in doing this). “Don’t write to convince white people,” he said. I found in that statement what is perhaps the advice I needed in order to keep writing.

For a black man, writing about race in a country hard-pressed to ignore its long marriage to it, and doing so with precision and integrity and without apology, is an act of defiance in and of itself.  Writing to speak, unburdening oneself of the responsibility of educating one’s opponents (and, in doing so, inadvertently educating a great deal of people), is how you keep touching on the untouchable subjects.

After the lecture, a small group of students gathered with Coates in the Brown Institute for a writing workshop sponsored by the Tow Center for Digital Journalism.  We were treated to a detailed walkthrough of Coates’s explosive June cover story on reparations for African-American descendants of slaves.  We learned about how he began his research, how he stayed organized, how he developed his argument and how it evolved over the year and a half that he worked on the piece.  It was clear that he was quite proud of it, not because he changed the minds of readers or because it had drawn so much attention to an issue often brushed off as impossible, but because he’d buried himself in the research, because he’d found a way to put a living, human face on the after-effects of policies that we often discuss as though they have none.  The piece is heavily reliant on data, but littered with human faces, human stories, human consequences.  It is deeply moving.  To me, it was convincing. Overwhelmingly, undeniably convincing.  And yet, his motivation was not to convince me, or anyone else, of anything.  He wrote to speak.

And speak he did.

Alexandria Neason is an education writer for the Teacher Project at Columbia Journalism School and a 2014 graduate of the school’s M.S. in Journalism program.

Past Events

Journalism After Snowden Lecture Series



Presented by: Columbia University Graduate School of Journalism and the Information Society Project at Yale Law School

 

September 29, 2014

6:00pm-7:30pm, with reception to follow

Brown Institute for Media Innovation at Columbia University Graduate School of Journalism

 

Source Protection: Rescuing a Privilege Under Attack

Speaker: David A. Schulz, Partner at Levine Sullivan Koch & Schulz, LLP

Moderator: Emily Bell, Director of the Tow Center for Digital Journalism

 

Watch Full Lecture 

David Schulz | Outside Counsel to The Guardian; Lecturer, Columbia Law School; Partner, Levine Sullivan Koch & Schulz LLP | @LSKSDave
David Schulz heads the New York office of Levine Sullivan Koch & Schulz, LLP, a leading media law firm with a national practice focused exclusively on the representation of news and entertainment organizations in defamation, privacy, newsgathering, access, copyright, trademark and related First Amendment matters. Schulz has been defending the rights of journalists and news organizations for nearly 30 years, litigating in the trial courts of more than 20 states, and regularly representing news organizations on appeals before both state and federal tribunals.

Schulz successfully prosecuted access litigation by the Hartford Courant to compel the disclosure of sealed dockets in cases being secretly litigated in Connecticut’s state courts, and the challenge by 17 media organizations to the closure of jury selection in the Martha Stewart criminal prosecution. He successfully defended against invasion of privacy claims brought by Navy SEALs whose photos with injured Iraqi prisoners were discovered online by a reporter, and has prevailed in Freedom of Information Act litigation pursued by the Associated Press to compel the release of files relating to detainees held by the Department of Defense at Guantanamo Bay and to records of the military service of President George W. Bush.

Schulz is described as an “incredibly skilled” litigation strategist and a “walking encyclopedia” of media law by Chambers USA (Chambers & Partners, 2006), and is recognized as one of the nation’s premier First Amendment lawyers by The Best Lawyers in America (Woodward/White, 2006). He regularly represents a broad range of media clients, including The New York Times, Associated Press, CBS Broadcasting, Tribune Company, and The Hearst Corporation, along with other national and local newspapers, television networks and station owners, cable news networks, and Internet content providers.

Schulz is the author of numerous articles and reports, including Policing Privacy, 2007 MLRC Bulletin 25 (September 2007); Judicial Regulation of the Press? Revisiting the Limited Jurisdiction of Federal Courts and the Scope of Constitutional Protection for Newsgathering, 2002 MLRC Bulletin 121 (April 2002); Newsgathering as a Protected Activity, in Freedom of Information and Freedom of Expression: Essays in Honour of Sir David Williams (J. Beatson & Y. Cripps eds., Oxford University Press 2000); and Tortious Interference: The Limits of Common Law Liability for Newsgathering, 4 Wm. & Mary Bill Rts. J. 1027 (1996) (with S. Baron and H. Lane). He received a B.A. from Knox College in Galesburg, Illinois, where he has served for more than twenty years on the Board of Trustees. He received his law degree from Yale Law School, and holds a master’s degree in economics from Yale University.

 

 

Announcements, Past Events, Research

Sensors and Certification


This is a guest post from Lily Bui, a sensor journalism researcher from MIT’s Comparative Media Studies program.

On October 20, 2014, Creative Commons Science convened a workshop involving open hardware/software developers, lawyers, funders, researchers, entrepreneurs, and grassroots science activists around a discussion about the certification of open sensors.

To clarify some terminology, a sensor can either be closed or open. Whereas closed technologies are constrained by an explicitly stated intended use and design (e.g., an arsenic sensor you buy at Home Depot), open technologies are intended for modification and not restricted to a particular use or environment (e.g., a sensor you can build at home based on a schematic you find online).

Over the course of the workshop, attendees listened to sessions led by practitioners who are actively thinking about whether and how a certification process for open hardware might mitigate some of the tensions that have arisen within the field, namely around the reliability of open sensor tools and the current challenges of open licensing. As we may gather from the Tow Center’s Sensors and Journalism report, these tensions become especially relevant to newsrooms thinking of adapting open sensors for collecting data in support of journalistic inquiry. Anxieties about data provenance, sensor calibration, and best practices on reporting sensor data also permeate this discussion. This workshop provided a space to begin articulating the needs required for sensor journalism to move forward.

Below, I’ve highlighted the key points of discussion around open sensor certification, especially as they relate to the evolution of sensor journalism.

Challenges of Open Sensors

How, when, and why do we trust a sensor? For example, when we use a thermometer, do we think about how well or often it has been tested, who manufactured it, or what standards were used to calibrate it? Most of the time, the answer is no. The division of labor that brings the thermometer to you is mostly invisible, yet you inherently trust that the reading it gives is an accurate reflection of what you seek to measure. So, what is it that instantiates this automatic trust, and what needs to happen around open sensors for people to likewise have confidence in them?

At the workshop, Sonaar Luthra of Water Canary led a session about the complexities and challenges that accompany open sensors today. Most concerns revolve around accuracy, both of the sensor itself and the data it produces. One reason for this is that the manufacture and integration of sensors are separate processes (that is to say, for example, InvenSense manufactures an accelerometer and Apple integrates it into the iPhone). Similarly, within the open source community, the development and design of sensors and their software are often separate processes from an end user’s assembly—a person looks up the open schematic online, buys the necessary parts, and builds it at home. This division of labor erodes the boundaries between hardware, software, and data, creating a need to recast how trust is established in sensor-based data.

For journalists, a chief concern around sensor data is ensuring, with some degree of confidence, that the data collected from the sensor is not erroneous and won’t add misinformation to the public sphere if published. Of course, this entirely depends on how and why the sensor is being used. If we think of accuracy as a continuum, then the degree of accuracy can vary depending on the context. If the intent is to gather a lot of data and look at general trends—as was the case with the Air Quality Egg, an open sensor that measures air quality—point-by-point accuracy is less of a concern when engagement is the end goal. However, different purposes and paradigms require different metrics. In the case of StreetBump, a mobile app that uses accelerometer data to help identify potential potholes, accuracy is a much more salient issue, since direct intervention from the city would mean allocating resources and labor based on the locations the sensor data suggests. Thus, creating a model to work toward shared parameters, metrics, resources, and methods might be useful to generate consensus and alleviate factors that threaten data integrity.

There may also be alternative methods for verification and accounting for known biases in sensor data. Ushahidi’s Crowdmap is an open platform used internationally to crowdsource crisis information. The reports depend on a verification system from other users for an assessment of accuracy. One can imagine a similar system for sensor data, pre-publication or even in real time. Also, if a sensor has a known bias in a certain direction, it’s also possible to compare data against an established standard (e.g., EPA data) and account for the bias in reporting on the data.
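
As a rough illustration of that last point, the sketch below estimates a sensor’s systematic offset from a trusted reference (such as a co-located regulatory monitor) and subtracts it from later readings. The values and function names are hypothetical and not drawn from any particular deployment or dataset.

```python
# A sketch of simple bias correction for an open sensor: estimate the
# systematic offset against a trusted co-located reference, then subtract
# it from later field readings. All numbers here are hypothetical.

def estimate_bias(sensor_readings, reference_readings):
    """Mean difference between paired sensor and reference measurements."""
    diffs = [s - r for s, r in zip(sensor_readings, reference_readings)]
    return sum(diffs) / len(diffs)

def apply_correction(readings, bias):
    """Subtract the estimated offset from raw sensor readings."""
    return [round(r - bias, 2) for r in readings]

# Paired calibration data collected while the open sensor sat next to the reference.
sensor_calibration    = [42.1, 39.8, 45.3, 41.0]
reference_calibration = [38.0, 36.2, 41.5, 37.4]

bias = estimate_bias(sensor_calibration, reference_calibration)
field_readings = apply_correction([44.7, 40.2, 38.9], bias)
print(f"estimated bias: {bias:.2f}", field_readings)
```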

To further investigate these questions, we can look toward extant models of verification in open science and technology communities. The Open Geospatial Consortium provides a way of thinking about interoperability among sensors, which requires that a consensus around standards or metrics be established. Alternatively, the Open Sensor Platform suggests ways of thinking about data acquisition, communication, and interpretation across various sensor platforms.

Challenges of Open Licensing for Sensors

A handful of licensing options exist for open hardware, including the CERN Open Hardware License, Open Compute Project License, and Solderpad License. Other intellectual property strategies include copyright (which can be easily circumvented and is sometimes questionable when it comes to circuits), patenting (which is difficult and costly to attain), and trademark (an option that offers a lower barrier to entry and would best meet the needs of open source approaches). However, whether formal licensing should be applied to open hardware at all remains an open question, as it would inevitably impose restrictions on a design or version of hardware that—within the realm of open source—is still susceptible to modification by the original developer or the open source community writ large. In other words, a licensing or certification process would transition what is now an ongoing project into a final product.

Also, in contrast to open software, wherein the use of open code is clearly demarcated and tracked by the process of copying and pasting, it is less clear at what point a user actually agrees to using open hardware (i.e., upon purchase or assembly, etc.) since designs often involve a multitude of components and are sometimes accompanied by companion software.

A few different approaches to assessing open sensors emerged during the workshop:

  1. Standards. A collaborative body establishes interoperable standards among open sensors, allowing for independent but overlapping efforts. (Targeted toward the sensor.)
  2. Certification/Licensing. A central body controls a standard, facilitates testing, and manages intellectual property. (Targeted toward the sensor.)
  3. Code of conduct. There exists a suggestion of uses and contexts for the sensor, i.e., how to use it and how not to use it. (Targeted toward people using the sensor.)
  4. Peer assessment. Self-defined communities test and provide feedback on sensors as seen in the Public Lab model. (Targeted toward the sensor but facilitated by people using it.)

In the case of journalism, methods of standardization would depend on how much (or little) granularity of data is necessary to effectively tell a story. In the long run, it may be that the means of assessing a sensor will be largely contextual, creating a need to develop a multiplicity of models for these methods.

Preliminary Conclusions

While there is certainly interest from newsrooms and individual journalists in engaging with sensor tools as a valid means for collecting data about their environments, it is not yet apparent what newsrooms and journalists expect from open sensors and for which contexts open sensor data is most appropriate. The products of this workshop are relevant to evaluating what standards—if any—might be necessary to establish before sensors can be more widely adopted by newsrooms.

In the future, we must keep some important questions in mind: What matters most to newsrooms and journalists when it comes to trusting, selecting, and using a sensor tool for reporting? Which sensor assessment models would be most useful, and in which context(s)?

With regard to the certification of open sensors, it would behoove all stakeholders—sensor journalists included—to determine a way to move the discourse forward.

References

  1. Pitt, Sensors and Journalism, Tow Center for Digital Journalism, May 2014.
  2. N. Bourbakis and A. Pantelopoulos, “A Survey on Wearable Sensor-based Systems for Health Monitoring and Prognosis,” Systems, Man, and Cybernetics, Part C: Applications and Reviews, Vol. 40, Iss. 1 (IEEE, Jan. 2010).
  3. Open Source Hardware Association (OSHWA), Definition page.