Past Events

In Defense of Leaks: Jill Abramson at the Tow Center

Jill Abramson spoke at the Columbia Journalism School last week on the topic “In Defense of Leaks”  as the final lecture of the Tow Center’s Journalism After Snowden Series.  The Series began with an inaugural panel last January with Jill Abramson, Janine Gibson, David Schulz and Cass Sunstein, moderated by Tow Center Director Emily Bell.

Abramson, the former executive editor of the New York Times, began her talk by contextualizing the Obama administration’s record on press freedom.  She quoted Gabriel Schoenfeld, a conservative critic who recently said, “Ironically, Obama has presided over the most draconian crackdown on leaks in our history.”  The current administration has exerted far more control than that of George W. Bush—President Obama has pursued eight criminal leak cases, more than double the number pursued by all prior administrations combined.  One close aide in the Obama White House told Abramson, “Obama hates all leaks. He likes things to be tidy. And he won’t tolerate leaks of classified information.”

Abramson was managing editor of the New York Times during the release of the WikiLeaks cables and executive editor when the Snowden documents were published, giving her a unique vantage point.  She described her position as a “front-row seat to politics and journalistic decisions that were involved in many of these cases.” She offered both a philosophical take on the importance of leaks and the role of journalists as independent watchdogs of democracy, and a practical account of making difficult calls on national security stories.

During the Bush administration, Abramson was involved in half a dozen cases where the White House asked the Times not to publish a story. In all but one case, the Times published the story but withheld sensitive details. Abramson underscored that these decisions are always incredibly difficult, but she was unequivocal about her stance. “I’ve come to believe that unless lives are explicitly in danger, such as during wartime, when you might be disclosing things that could endanger troops or involving putting people who are under cover in danger, almost all of these stories should be brought out in public, except in certain circumstances.”

A salient moment of the lecture was Abramson’s retelling of the decision to publish a particular national security story.  In 2011, the director of national intelligence asked Abramson to hold a story, scheduled to run the next day, about a telephone intercept of a well-known terrorist leader.  She was told that many U.S. embassies abroad had been emptied out, and that the government believed an attack might be carried out if the story was published.  The director of national intelligence at the time told Abramson, “If you publish this story you will have blood on your hands.”

“I had heard these same words from the Bush administration. Significantly aiding Al-Qaeda. When the president or the director of National Intelligence says that, any editor takes this very seriously.”  The Times altered the story, removing the names of those whose conversations were being intercepted.

“Our careful deliberations were beside the point—the next morning McClatchy published the story, names and all.  McClatchy took a different posture from the Times. McClatchy didn’t feel it was important to call the U.S. government for input on these cases.”  While Abramson insisted that for some stories it is important to work closely with the White House and the intelligence community, she also reflected: “In retrospect, I actually think McClatchy made the right call.”

Implicit in Abramson’s retelling of these stories was skepticism about the government’s behavior during both the Bush and Obama administrations, and a sense that national security has too often been invoked unnecessarily to silence journalists.

 

She defended the patriotism of journalists (“We are actually patriots, too”), repeatedly expressed regret about withholding certain stories, and criticized the immense growth of the surveillance state: “I think we [journalists] have been too meek.”

Much of the lecture seemed like an act of journalism in itself: the retelling of stories about the New York Times newsroom in a post-9/11 climate. The stories were personal, about the decisions and actions of individual reporters and editors: to publish or not, to store or destroy drives, to collaborate with or keep a distance from government institutions. These stories are valuable for young students of journalism, who can expect to be exposed to an increasing number of national security stories in the years to come and may face similar questions throughout their careers. For example, Abramson described receiving a call from Alan Rusbridger, the editor-in-chief of the Guardian, asking the Times to keep a copy of the Snowden cache. This was a unique moment in which ordinarily competing publications collaborated across the pond to store and protect information they believed to be in the public interest.

And yet, Abramson expressed disappointment about the way the NSA story has been reported since the initial newsbreak. She took particular note of the media’s seeming lack of interest in the NSA’s collaboration with the Israeli government.

Abramson also commented on the current case of James Risen, the New York Times reporter: “He has been a thorn in the side of the government for years and I am proud of him for that.”  The Department of Justice will decide this Tuesday whether it will subpoena Risen.

You can watch Jill Abramson’s lecture in its entirety here.

The Journalism After Snowden series, supported by The Tow Foundation and The John S. and James L. Knight Foundation, has included lectures by David Sanger, Steve Coll, Ethan Zuckerman, James Bamford, and Jill Abramson, in collaboration with the Yale Information Society Project.

The Tow Center will continue this conversation with a panel on National Security Reporting in the Age of Surveillance with Dean Baquet, the Executive Editor of the New York Times, Marty Baron, the Executive Editor of the Washington Post, and Susan Glasser of Politico.  This closing event will be held at the Newseum in Washington, D.C. on February 5th, 2015.  RSVP here.

The Journalism After Snowden project has also included Digital Security Workshops for graduate students of journalism.

 

Past Events

The Future of BuzzFeed: Notes on a Tow Tea

This is a guest post by Lede student Andrea Larson.

BuzzFeed got its start as a tech company. From the beginning, it has been actively creating and revamping algorithms to predict the infectiousness of content. It wasn’t surprising, then, to hear Ky Harlin, Director of Data Science at BuzzFeed, compare the virality measures used at BuzzFeed to the basic reproduction number (R0) used to model the spread of infectious diseases.  The formulas are nearly identical.
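
To make the analogy concrete, here is a minimal sketch (with hypothetical numbers and function names, not BuzzFeed’s actual model): an outbreak grows when each case infects more than one new person, and a post goes viral when each viewer generates more than one new viewer.

```python
def basic_reproduction_number(contacts_per_day, transmission_prob, infectious_days):
    """Epidemiology: expected number of new infections caused by a single case (R0)."""
    return contacts_per_day * transmission_prob * infectious_days

def viral_coefficient(shares_per_view, views_per_share):
    """Content analogue: expected number of new viewers generated by a single viewer."""
    return shares_per_view * views_per_share

# Hypothetical inputs for illustration only; a value above 1 implies exponential spread.
print(basic_reproduction_number(contacts_per_day=10, transmission_prob=0.02, infectious_days=7))  # 1.4
print(viral_coefficient(shares_per_view=0.05, views_per_share=30))                                # 1.5
```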

Harlin and Samir Mezrahi, head of Social Media and New Platforms, both have backgrounds rooted in math. Harlin was an engineering student at Columbia and worked for a biotech start-up before landing the gig at BuzzFeed. Mezrahi worked for an accounting firm in Oklahoma until he decided that number crunching wasn’t doing it for him anymore. They both spoke last Thursday about data’s role before and after the publication of content. They are constantly trying to determine which pieces of content do well and why, as well as which algorithms make for poor predictors. If BuzzFeed is to maintain its ability to predict which posts will go viral, its dedication to algorithms is a necessity.

Three years ago, Ben Smith of Politico was hired as editor-in-chief. According to BuzzFeed founder Jonah Peretti, Smith’s arrival completely changed how content is posted and what it means to post content on BuzzFeed’s site. Smith and Peretti’s recent public response regarding BuzzFeed’s deletion of more than 4,000 of its own posts indicates that the content giant’s interests have turned toward more traditional journalistic practices.

With all of the in-house renovations, I wondered whether the data science department had been bypassed. Would the focus of the team shift from measuring the contagiousness of posts to creating newsworthy content? I had the opportunity to speak with Ky and Samir after the presentation. They told me BuzzFeed’s editorial operation was recently divided into three sections: News, Life, and BuzzTeam. BuzzFeed hopes that the new split will allow for news diversification and a greater number of in-depth stories. Each sector has a different set of responsibilities and team members with differing strengths and backgrounds. Since the split, the relationship between the Data Science team and the editorial teams at BuzzFeed has evolved.  Harlin said that his team of ten can be likened to a miniature consulting firm: they are continually crafting new solutions to in-house problems and inventing ways to make posts that content consumers will want to share.

Harlin’s description of the spread of content by platform, the use of machine learning to predict social hits, and his claim that interpreting algorithms matters more than building them left me wondering what BuzzFeed’s future will look like.  I think that BuzzFeed has the ability to create well-crafted, insightful content time and time again.

Andrea Larson is a data journalist currently studying in Columbia’s Lede program.

Research

New Tow Project: Virtual Reality for Journalism

Long a figment of the technophile imagination, in-home virtual reality has finally been pushed to the cusp of mainstream adoption by a confluence of technological advances. Media attention and developer interest have surged, powered by the release of the Oculus Rift to developers, the anticipated launch of Samsung’s Gear VR, rumored headsets from Sony and Apple, and a cheeky intervention from Google called Cardboard: a simple VR player made of cardboard, Velcro, magnets, a rubber band, two biconvex lenses, and a smartphone.

We now have the computational power, screen resolution and refresh rate to play VR in a small and inexpensive headset. And within a year, VR will be a commercial reality. We know that users will be able to play video games, sit court-side at a basketball game and view porn. But what about watching the news or a documentary? What is the potential for journalism in virtual reality?

Virtual reality is, of course, not new. A generation of media and tech researchers used cumbersome headsets and VR ‘caves’ to experiment with virtual environments. Research focused mostly on how humans engage with virtual environments when the mind is tricked into thinking they are real.  Do we learn, care, empathize, and fear as we do in real life?  Do we feel more?  This research is tremendously important as we enter a new VR age, one that moves out of the lab and into people’s homes.

In addition to the headsets, a second technology is set to transform the VR experience. While initial uses of the Oculus Rift and similar devices have focused on computer generated imagery (gaming) and static 360° images (such as Google Street View), new experimental cameras are able to capture live motion 360° and 3D virtual reality footage.

The kit is made of 12 to 16 cameras mounted on a 3D-printed brace; the footage is then stitched onto a virtual sphere to form a 360-degree virtual environment. While 360-degree cameras have been around for years, these new kits are also stereoscopic, adding depth of field. They are not yet commercially available, but several are in production, including one by the startup Jaunt and another by NextVR that uses six extremely high-resolution Red Epic Dragon cameras. We are working with the media production company Secret Location, which has also built a prototype, pictured below.

This new camera technology opens up a tremendous opportunity for journalists to immerse audiences in their stories, and for audiences to experience and connect to journalism in powerful new ways. This is the focus of a new Tow Center research project studying and prototyping live motion virtual reality journalism.

The project is a partnership between Frontline, Secret Location, and the Tow Center.  James Milward, the CEO of Secret Location, is leading the production; Raney Aronson, the Deputy Executive Editor of Frontline, is leading the field experiment and shoot; Dan Edge is taking the camera into the field; and I am leading the research.  Together, along with Pietro Galliano, Sarah Moughty, and Fergus Pitt, we will be running the demo project and authoring a Tow Brief, to be published in partnership with MIT, documenting the process and lessons learned.

The project recently won a Knight Foundation Prototype Grant.

In short, this project explores the extension of factual filmmaking onto this new platform. Unlike other journalistic VR work, such as the pioneering project by Nonny de la Pena, which has relied on computer-generated graphics, this project will be centered on live video, delivering an experience that feels more like documentary and photojournalism than a console game. There are few examples of this type of journalism. The one that comes closest would be Gannett’s recent project for the Oculus Rift called Harvest of Change.

The first phase of the Tow Center VR project has several components.

First, we are testing the equipment needed to capture live motion virtual reality footage.  This includes a prototype 360/3D camera and surround sound audio recording. We recently held a training session for the camera at the Secret Location Toronto office.

 

Twelve GoPros mounted in a 3D-printed brace.

The 360° stereoscopic camera, with directional microphone.

Second, we are deploying this video and audio equipment in the field on a story about the Ebola outbreak being directed for Frontline by Dan Edge, a renowned documentary filmmaker. This phase will test how the camera can be used in challenging environments. But crucially, it will also explore new journalistic conventions. How do you tell a story in VR? What does narrative look like?  Dan is currently in Guinea with the camera, and he will travel to Liberia and Sierra Leone in early 2015.

Third, we will test new post-production VR processes, including the addition of interactivity and multimedia to the VR environment.

The demo will launch in the spring alongside the release of the feature Frontline documentary, with an accompanying report documenting the experiment and what we have learned. We will also be hosting a panel on VR journalism at this year’s SXSW featuring James Milward, Nonny de la Pena, and the head of Vice News, Jason Mojica.

We are all acutely aware that this emerging practice, while exciting, presents some challenging questions.

For the practice of journalism, virtual reality presents a new technical and narrative form. It requires new cameras, new editing and shooting processes, new viewing infrastructure, and new levels of interactivity, and it can leverage distributed networks in new ways. In addition to these technical innovations, an emerging scholarly discourse is exploring how virtual reality also challenges traditional notions of narrative form. Virtual reality, combined with the ability to add interactive elements, changes the positionality of the journalist, breaking down the fourth wall of journalism. Storytelling is pulled from its bounded linear form and moved to a far more fluid space in which the audience has new (though still limited) agency in the experience of the story. This changes how journalists must construct their stories and their place in them, and challenges core journalistic assumptions of objectivity and observation.  It also changes how audiences engage with journalism, bringing them into stories in a visceral, experiential manner not possible in other mediums.

CBC Interview on Virtual Reality Journalism with Taylor Owen and Nonny de la Pena

More conceptually, virtual reality journalism also offers a new window through which to study the relationship between consumers of media and the representation of subjects. Whereas newspapers, radio, television, and then social media each brought us closer to being immersed in the experience of others, virtual reality has the potential to further break down this distance. A core question is whether virtual reality can provide feelings of empathy and compassion similar to those of real-life experiences. Recent work has shown that virtual reality can create a feeling of ‘social presence,’ the sense that a user is really there, which can create far greater empathy for the subject than other media representations do. Others have called this experience ‘co-presence,’ and are exploring how it can be used to bridge the distance between those experiencing human rights abuses and those in a position to assist or better understand conflict.

It is our hope that this initial project, as well as a planned larger multiyear research project, will begin to shed light on some of these questions.

Past Events

BuzzFeed, Data, and the Future of Journalism: Reflections on a Recent Tow Tea

By Ilia Blinderman

As a young journalist, it’s often difficult to maintain a positive outlook when considering the future of the industry. And yet, if one were to pick an outlet likely to weather the storm of ambiguity facing today’s media, BuzzFeed would make for an unusually safe bet. Certainly, whatever your opinion of the former listicle clearinghouse’s journalistic forays (many of which have been unqualified successes, despite the company’s difficulty shedding its digital-variety-act image and communicating a new, more rigorous persona), BuzzFeed is doing very, very well. As I learned at an early December Tow Tea, the staggering number of hits the site generates is in no small part due to the efforts of its data science team.

In fact, Ky Harlin, BuzzFeed’s Director of Data Science, alongside Samir Mezrahi, a longtime BuzzFeed reporter, noted that data science has played an integral role in the company for many years. The results of Harlin’s A/B tests, in which some users see one variant of a post and others see another, are unequivocal. BuzzFeed editors have learned to tweak a post’s headline, photo types, and list item order to maximize the virality of the content, which, in turn, varies by social network. These rules, according to Harlin, do not dictate editorial direction, but rather furnish journalists with a set of new-media best practices that dovetail with editorial judgment.
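
As a rough illustration of what such a comparison involves (the numbers and helper below are hypothetical, not BuzzFeed’s tooling), a simple two-proportion z-test can indicate whether one headline variant’s click-through rate beats another’s by more than chance:

```python
from math import sqrt
from statistics import NormalDist

def ab_test(clicks_a, views_a, clicks_b, views_b):
    """Two-sided z-test comparing the click-through rates of two headline variants."""
    p_a, p_b = clicks_a / views_a, clicks_b / views_b
    p_pool = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return {"ctr_a": p_a, "ctr_b": p_b, "z": z, "p_value": p_value}

# Hypothetical traffic split: is variant B's higher click-through rate statistically meaningful?
print(ab_test(clicks_a=480, views_a=10_000, clicks_b=560, views_b=10_000))
```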

Harlin and Mezrahi were thoughtful, unassumingly confident, and analytically savvy, none of which surprised the members of the media; most were aware of BuzzFeed’s meteoric rise and remarkable performance. It merits mention, however, that at the precise hour that the pair spoke to a packed audience of journalists, students, and researchers, the public was learning that one of America’s most revered magazines was undergoing a violent upheaval. The New Republic, which had found an uncanny balance between the cerebral and the comprehensible, was being transplanted from Washington, D.C., to New York; Franklin Foer, its editor, and Leon Wieseltier, who had helmed its cultural coverage for over three decades, were out. Writer and critic Walter Kirn was, perhaps, most eloquent in his reaction: “Franklin Foer is leaving [The New Republic] along with Leon Wieseltier, which is like the soul leaving the body.” The future of high-quality long-form journalism — which Chris Hughes, The New Republic’s owner, told readers he sought to ensure — was once more in perilous straits.

The question to ask at this juncture is not whether data science has the answer for BuzzFeed; for the site’s earlier incarnation, the answer is a resounding yes. More interesting, and orders of magnitude more challenging, will be its role in shoring up BuzzFeed’s longer, more thought-provoking content. I can only hope for the best for The New Republic and its staff, both former and current, and can only speculate, albeit optimistically, about its new direction. BuzzFeed, meanwhile, is making laudable strides toward becoming a respected source of long-form reporting. Whether data science can catapult it to the forefront of outlets producing rigorous long-form content remains to be seen.

Ilia Blinderman is a data journalist who writes about culture and science.  He is studying in Columbia’s Lede program.  Follow him on Twitter at @iliablinderman.

 

Research, The Tow Center

Our First Three Discoveries About Audience Metrics at Chartbeat, Gawker, and the New York Times

Last year, BuzzFeed published lists of the most tweeted, Facebooked, and searched-for stories across its now-defunct partner network, which included sites like The New York Times, the Huffington Post, and The Atlantic. The lists provided plentiful (if unsurprising) fodder for anyone who is worried about the future of participatory democracy in an age when we get more and more of our news from Facebook: the top 10 most-shared stories on the social networking site included “27 Shocking And Unexpected Facts You Learn In Your Twenties,” “What Happens If You Text Your Parents Pretending To Be A Drug Dealer?” and the Times’s wildly successful dialect quiz, “How Y’all, Youse, and You Guys Talk.” But the lists were also striking for a different reason: they underscored the fact that the online habits of news readers are being tracked constantly. While BuzzFeed’s lists focused on clicks and shares, many analytics dashboards go far beyond these measures to include time spent reading, scroll depth (how far one scrolls down a page before clicking on something else), recirculation (how many pages one consecutively visits on a particular site), and visit frequency over time.
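
To make those definitions concrete, here is a minimal sketch, using a made-up visit log rather than any vendor’s actual schema, of how such metrics can be computed:

```python
from collections import defaultdict

# Hypothetical per-visit records: (visitor_id, pages_viewed_in_visit, max_scroll_fraction)
visits = [
    ("u1", 1, 0.35),
    ("u1", 4, 0.90),
    ("u2", 2, 0.60),
]

def recirculation(visits):
    """Share of visits that continue to at least one more page on the same site."""
    return sum(1 for _, pages, _ in visits if pages > 1) / len(visits)

def average_scroll_depth(visits):
    """Average fraction of the page scrolled before the reader leaves."""
    return sum(depth for _, _, depth in visits) / len(visits)

def visit_frequency(visits):
    """Average number of visits per unique reader over the sample window."""
    per_reader = defaultdict(int)
    for visitor, _, _ in visits:
        per_reader[visitor] += 1
    return sum(per_reader.values()) / len(per_reader)

print(recirculation(visits), average_scroll_depth(visits), visit_frequency(visits))
```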

The sheer ubiquity of this data makes the longstanding professional argument about whether metrics have any place in a newsroom seem almost quaint. The question isn’t whether metrics should be in newsrooms to begin with – they already are, and they’re not going anywhere. The question being debated nowadays is: now that all this data is available, what are the best measures of audience behavior, and what should journalists do with them?

Important questions, to be sure. But the premise of my Tow project is that before attempting to answer these normative questions, it is helpful to investigate some empirical ones. How do analytics companies produce metrics? What values, assumptions, and ideas about journalism are embedded within these numbers? How do analytics tools interact with established work routines and organizational dynamics in different types of news organizations? Answering these questions requires that we set aside the idea that data can “speak for itself,” and instead examine the creation and use of news metrics as social processes – that is, as active decisions that are carried out by groups of human beings. Over the past year and a half, I’ve been studying metrics this way by observing day-to-day operations and conducting interviews at Chartbeat, Gawker Media, and the New York Times.

Here are some things I’ve discovered:

  • News metrics are hard to interpret because journalism has multiple aims. The meaning of metrics is relatively straightforward in fields with a singular, easily measurable goal: the Oakland A’s in Michael Lewis’s Moneyball were trying to maximize wins, and everyone agreed on what constituted a win. Journalism is (and has always been) considerably more complicated, because there are many ways a news story can be said to “win”: it can break news, it can prompt a legal change, it can shatter a damaging myth, it can be read by a huge number of people, and so on. This complexity means that news metrics can come to resemble Rorschach tests; as a reporter at the Times put it, “you rarely have an apples to apples comparison….There are so many other things kind of confounding it. It’s very easy for everybody to read their own agenda into the numbers.” At Chartbeat, employees wrestle with the question of how much the analytics dashboard should interpret metrics and recommend actions. Provide too little interpretation and the clients may see the dashboard as insufficiently useful or “actionable”; provide too much and editors may view the tool as a usurper and resent it. In sum, the way in which metrics are interpreted (and by whom) tells us much about the power dynamics and internal politics of the media field.
  • Organizational culture matters – a lot – when it comes to how metrics are used. In other words, the mere existence of data in newsrooms does not tell us much about what journalists do with it. Case in point: the Times and Gawker both use similar Chartbeat dashboards (among other analytics tools) but the organizations use this data in completely different ways. At Gawker, metrics are a prominent presence in the newsroom: large screens displaying real-time traffic rankings of stories and individuals famously loom over writers’ heads in the company’s offices; these rankings are also publicly available online and help determine bonuses. At the Times, access to analytics tools is mostly confined to editors (though this may soon be changing in the wake of the Innovation Report), and reporters’ traffic is not a factor in performance reviews. These different approaches for dealing with the same metrics can be traced back to each organization’s particular history, structure, and culture – yet these factors often get overlooked in conversations about how metrics are affecting news.
  • Metrics serve social and emotional functions just as much as rational ones. We tend to think of the analytics dashboard as a dispassionate tool whose purpose is to provide objective data about reader behavior. For this reason, metrics have gained a reputation as ego-busters, as journalists can discover that their readership is smaller and far less attentive than they imagined. While some find this information to be helpful, if humbling, others can sour on a tool if it only tells them things they don’t want to hear. To avoid this, Chartbeat builds into the ostensibly neutral dashboard opportunities for optimism and celebration, sometimes even at the expense of the data’s utility. For instance, the dashboard’s traffic dial is designed to “break” when clients’ traffic is surging past a pre-set cap. As one Chartbeat employee put it to me, when the dial maxes out “the product is broken, but in a fun way… if you didn’t have that excitement, [the product] wouldn’t work.”

These findings, along with others I’ll introduce in the full report (to be launched in late March), illustrate a basic – though easily overlooked – truth: to know the implications of metrics for journalism, we must first understand how this data is created, interpreted, and used by real people in actual organizations.

Research

What’s in a Ranking?

The web is a tangled mess of pages and links. But through the magic of the Google algorithm, it becomes a neatly ordered ranking of “relevance” to whatever our hearts desire. The network may be the architecture of the web, but the human ideology projected onto that network is the rank.

Often enough we take rankings at face value; we don’t stop to think about what’s really in a rank. There is tremendous power conferred upon the top N, of anything really, not just search results but colleges, restaurants, or a host of other goods. These are the things that get the most attention and become de facto defaults because they are easier for us to access. In fact we rank all manner of services around us in our communities: schools, hospitals and doctors, even entire neighborhoods. Bloomberg has an entire site dedicated to them. These rankings have implications for a host of decisions we routinely make. Can we trust them to guide us?

Thirty years ago, rankings in the airline reservation systems used by travel agents were regulated by the U.S. government. Such regulation served to limit the ability of operators to “bias travel-agency displays” in a way that would privilege some flights over others. But this regulatory model for reining in algorithmic power hasn’t been applied in other domains, like search engines. It’s worth asking why not and what that regulation might look like, but it’s also worth thinking about alternatives to regulation that we might employ for mitigating such biases. For instance, we might design advanced interfaces that transparently signal the various ways in which a rank, and the scores and indices on which it is built, are constituted.

Consider an example from the local media: the “Best Neighborhoods” app published by the Dallas Morning News (shown below). It ranks various neighborhoods according to criteria like schools, parks, commute, and walkability. The default “overall” ranking, though, is unclear: How are these various criteria weighted? And how are the criteria even defined? What does “walkability” mean in the context of this app? If I am looking to invest in property, I might be misled by a simplified algorithm; does it really measure the dimensions that are of most importance? While we can interactively re-rank by any of the individual criteria, many people will only ever see the default ranking. Other neighborhood rankings, like the one from the New Yorker in 2010, do show the weights, but they’re non-interactive.

Screenshot of the Dallas Morning News “Best Neighborhoods” app.
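
A small sketch illustrates the problem; the neighborhoods, scores, and weights below are invented, not the Dallas Morning News’s data or methodology, but they show how much the “overall” winner depends on weights the reader never sees.

```python
# Hypothetical criterion scores on a 0-10 scale; not the Dallas Morning News's data.
neighborhoods = {
    "Neighborhood A": {"schools": 6, "parks": 8, "commute": 5, "walkability": 9},
    "Neighborhood B": {"schools": 9, "parks": 6, "commute": 7, "walkability": 4},
}

def rank(neighborhoods, weights):
    """Order neighborhoods by a weighted sum of their criterion scores."""
    def overall(scores):
        return sum(weights[c] * scores[c] for c in weights)
    return sorted(neighborhoods, key=lambda name: overall(neighborhoods[name]), reverse=True)

# The same data produces different "best" neighborhoods under different weightings.
print(rank(neighborhoods, {"schools": 0.4, "parks": 0.2, "commute": 0.2, "walkability": 0.2}))
print(rank(neighborhoods, {"schools": 0.1, "parks": 0.2, "commute": 0.2, "walkability": 0.5}))
```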

The notion of algorithmic accountability is something I’ve written about here previously. It’s the idea that algorithms are becoming more and more powerful arbiters of our decision making, both in the corporate world and in government. There’s an increasing need for journalists to think critically about how to apply algorithmic accountability to the various rankings that the public encounters in society, including rankings (like neighborhood rankings) that their own news organizations may publish as community resources.

What should the interface be for an online ranking so that it provides a level of transparency to the public? In a recent project with the IEEE, we sought to implement an interface for end-users to interactively re-weight and visualize how their re-weightings affected a ranking. But this is just the start: there is exciting work to do in human-computer interaction and visualization design to determine the most effective ways to expose rankings interactively in ways that are useful to the public, but which also build credibility. How else might we visualize the entire space of weightings and how they affect a ranking in a way that helps the public understand the robustness of those rankings?

When we start thinking about the hegemony of algorithms and their ability to generalize nationally or internationally, there are also interesting questions about how to adapt rankings for local communities. Take something like a local school ranking. Rankings by national or state aggregators like GreatSchools may be useful, but they may not reflect how an individual community would choose to weight or even select criteria for inclusion in a ranking. How might we adapt interfaces or rankings so that they can be more responsive to local communities? Are there geographically local feedback processes that might allow rankings to reflect community values? How might we enable democracy or even voting on local ranking algorithms?

In short, this is a call for more reflection on how to be transparent about the data-driven rankings we create for our readers online. There are research challenges here, in human-centered design, in visualization, and in decision sciences that if solved will allow us to build better and more trustworthy experiences for the public served by our journalism. It’s time to break the tyranny of the unequivocal ranking and develop new modes of transparency for these algorithms.

Announcements, Past Events

Upcoming Events: Journalism After Snowden

Journalism After Snowden Closing Event

On February 5th, 2015, the Tow Center will host an event on National Security Reporting in the Age of Surveillance: A Conversation About Reporting Post-Snowden, with Dean Baquet of the New York Times, Marty Baron of the Washington Post, Susan Glasser of Politico, and Steve Coll of Columbia Journalism School.  The event will be held at the Knight Conference Center at the Newseum in Washington, D.C., and will include the launch of a new Pew study.

More information about how to RSVP to come.

Past Events

Ta-Nehisi Coates at the Tow Center

By Alexandria Neason

Two weeks ago, students, alumni, faculty, and others packed the lecture hall at Columbia Journalism School to see The Atlantic’s Ta-Nehisi Coates, perhaps America’s foremost writer on race, speak about the media’s often contentious relationship with it.  Lately, the news has been more saturated with conversations about race than I can ever remember. Coverage of the policing and killing of black boys and men, of the burden of raising black children, of reparations and so-called “post-racial” multiculturalism has brought to the mainstream what people of color have long known.  America (still) has a race problem.

Photo by Rhon Flatts

When I think about the history of American journalistic writing on race, it is difficult to separate the writing from the activism that it often highlights.  It’s hard to imagine white, Northern journalists traveling to the unapologetically violent 1960s-era South without some fledgling belief that what was happening was wrong. It is hard to imagine that a journalist could cover the killings of Sean Bell, of Trayvon Martin and Oscar Grant and Jordan Davis and Mike Brown and all the others – without an understanding that something, somewhere, is deeply wrong.

I’d come to hear Coates speak because I wanted to know how he did it.  I wanted to know how he confronted the ongoing legacy of American racism every day on his blog, subject to anonymous “keyboard commanders,” as he referred to them.  I wanted to know how he dealt with what I can only assume were vile emails in response to his June cover story on the case for African-American reparations. I wanted to know how he wrote about racist housing policies and the constant loss of young, black life without becoming disempowered by the lack of change.  I’d come to hear how he kept going even when the ideals of journalism – to effect change – proved elusive.

Coates talked at length about how he got his start in journalism. He spoke about dropping out of Howard University, and about his lack of interest in romanticizing that.  He spoke about the importance of learning to report and to write.  He spoke of the difference between the two.  He spoke about waking up with aching questions and going to bed still bothered by them.  He spoke about reading, about writing constantly (even if it’s bad) as a means of practicing.  He talked about the absolute need to practice.  He told us that writing needs to be a top priority, below only family and health, if we hope to make a career out of it. He told us that if we didn’t love it, to leave.  “It’s just too hard,” he said.

And then he contradicted much of what I’d been taught about journalism. He told us not to expect to change anything with our writing.

I was startled. Wasn’t that the point? To educate, to inform, and ultimately, to change?

No. Coates doesn’t write to change the world. He doesn’t write to change the minds of white people (and he warned both white writers and writers of color of the dangers in doing this). “Don’t write to convince white people,” he said. I found in that statement what is perhaps the advice I needed in order to keep writing.

For a black man, writing about race in a country hard-pressed to ignore its long marriage to it, and doing so with precision and integrity and without apology is an act of defiance in and of itself.  Writing to speak, unburdening oneself of the responsibility of educating your opponents (and, in doing so, inadvertently educating a great deal of people), is how you keep touching on the untouchable subjects.

After the lecture, a small group of students gathered with Coates in the Brown Institute for a writing workshop sponsored by the Tow Center for Digital Journalism.  We were treated to a detailed walkthrough of Coates’s explosive June cover story on reparations for African-American descendants of slaves.  We learned about how he began his research, how he stayed organized, how he developed his argument, and how it evolved over the year and a half that he worked on the piece.  It was clear that he was quite proud of it, not because he changed the minds of readers or because it had drawn so much attention to an issue often brushed off as impossible, but because he’d buried himself in the research, because he’d found a way to put a living, human face on the after-effects of policies that we often discuss as though they have none.  The piece is heavily reliant on data, but littered with human faces, human stories, human consequences.  It is deeply moving.  To me, it was convincing. Overwhelmingly, undeniably convincing.  And yet, his motivation was not to convince me, or anyone else, of anything.  He wrote to speak.

And speak he did.

Alexandria Neason is an education writer for the Teacher Project at Columbia Journalism School and a 2014 graduate of the M.S. in Journalism program.

Research

Sensor Journalism: Communities of Practice

As seen in the Tow Center’s Sensors & Journalism report, the growing availability and affordability of low-cost, low-power sensor tools enables journalists and publics to collect and report on environmental data, irrespective of government agency agendas. The issues that sensor tools already help measure are manifold: noise levels, temperature, barometric pressure, water contaminants, air pollution, radiation, and more. When aligned with journalistic inquiry, sensors can serve as useful tools to generate data to contrast with existing environmental data or to provide data where previously none existed. While there are certainly various types of sensor journalism projects with different objectives and outcomes, the extant case studies (as outlined in the Tow report) provide a framework after which forthcoming projects can be modeled.

But it may not be enough to simply identify examples of this work.

Invariably, just as important as building a framework for sensor journalism is building a community of practice for it, one that brings together key players to provide a space for asking critical questions, sharing best practices, and fostering connections and collaborations. Journalism certainly doesn’t happen in a vacuum; it is best served by collaborations with and connections to outside sources. A sensor journalism community has already begun to emerge on a grassroots level, spanning multiple countries, disciplines, and issues of concern. We might look to the nodes in this community to outline a protean map of stakeholders in the field:

Journalists.

Since public opinion can be driven by press coverage, and since storytelling is a central part of news, journalists, documentarians, and media makers with an interest in sensor data play an important role in shaping how the public perceives certain issues. More than that, the media also have the ability to highlight issues that may have slipped under the radar of policymakers. In this scenario sensor data could potentially serve as evidence for or against a policy decision. Most sensor journalism case studies so far have relied on normative forms (print, online) to convey the data and the story, but there is much room for experimentation, e.g. sensor-based documentary, radio, interactive documentary, data visualization, and more.

Educators.

In the classroom, there is an undeniable opportunity to cultivate a generation of journalists and media makers who are unintimidated by hardware and technology. Not only this — the classroom also becomes an ideal place to test technology without being beholden to the same restrictions or liabilities as professional newsrooms. Educators across the U.S. have begun incorporating DIY sensors into classroom projects (see Emerson College, Florida International University, and San Diego State University projects), the results of which touch on many of the same questions that professional journalists encounter when it comes to sensor tools. The teaching practices applied to sensor journalism can also be the foundations of training models for professional journalists and civic groups seeking to investigate issues.

Hardware developers.

Because hardware developers design and build the tools that journalists and others would potentially be using, they have a stake in how the tool performs downstream of development. Journalists can also collaborate with hardware developers in identifying tools that would be most helpful: journalists may have specific requirements for data accuracy, data resolution, range of measurement, or the maturity of their equipment. Likewise, hardware experts can recommend tools that provide access to raw data and transparent interpretation algorithms. On the flip side, some hardware developers, particularly in the open source community, may help identify potential environmental issues of concern that then inform journalists’ research. Recently, a conversation about the certification of sensors, which originated within the hardware development community, crystallized around the notion of how to instantiate trust in open sensor tools (or sensors in general) when used for various purposes, journalism included. This is telling of how an open dialogue between hardware developers and journalists might be beneficial in defining these initial collaborative relationships.

Researchers.

Since using sensor tools and data in journalism is new, there is still significant research to be done around the effectiveness of such projects from both a scientific/technological standpoint as well as one of media engagement and impact. Researchers are also best poised, within academia, to examine tensions around data quality/accuracy, sensor calibration, collaborative models, etc. and help provide critical feedback on this new media practice.

Data scientists, statisticians.

Since most journalists aren’t data scientists or statisticians by training, collaborations with data scientists and statisticians have been and should be explored to ensure quality analysis. While some sensor journalism projects are more illustrative and don’t rely heavily on data accuracy, others that aim to affect policy are more sensitive to such considerations. Journalists working with statisticians to qualify their data could contribute toward more defensible statements and, potentially, policy decisions.

Activists, advocates.

Because many open sensor tools have been developed and deployed on the grassroots level, and because there is a need to address alternative sources of data (sources that are not proprietary and closed off to the public), activists play a key role in the sensor journalism landscape. Journalists can sometimes become aware of issues from concerned citizens (like the Sun Sentinel’s “Above the Law” series about speeding cops); therefore, it’s essential to cultivate a space in which similarly concerned citizens can voice and discuss concerns that may need further investigation.

Urban designers, city planners, architects.

Many cities already have sensor networks embedded within them. Some of the data from these sensors are proprietary, but some data are publicly accessible. Urban designers, city planners, and architects look to data for context on how to design and build. For instance, the MIT SENSEable City Lab is a conglomerate of researchers who often look to sensor data to study the built environment. Sensor data about environmental factors or flow can help inform city design and planning decisions. Journalists or media makers can play a role in completing the feedback loop — communicating sensor data to the public as well as highlighting public opinions and reactions to city planning projects or initiatives.

Internet of Things.

Those working in the Internet of Things space approach sensor networks on a different level. IoT endeavors to build an infrastructure that includes sensors in almost everything so that devices can interact better with people and with each other. At the same time, IoT infrastructures are still in development, and the field is just beginning to lay its groundwork in the public consciousness. Imagine motion sensors at the threshold of your house that signal to a network that you’re home, which then turns on the devices you most commonly use so that they’re ready for you. Now imagine that on a neighborhood or city scale. Chicago’s Array of Things project aims to equip the city with environmental sensors that can report back data in real time, informing residents and the city government about various aspects of the city’s performance. What if journalists could have access to this data and serve as part of a feedback loop back to the public?


By no means is this a complete map of the sensor journalism community. One would hope that the network of interested parties in sensor journalism continues to expand and include others — within policy, legacy news organizations, and more — such that the discourse it generates is a representative one that can both challenge and unite the field. Different methodologies of collecting data with sensors involve different forms of agency. In some sensor journalism scenarios, the agents are journalists; in others, the agents are members of the public; and in others yet, the agents can be governments or private companies. Ultimately, who collects the data affects data collection methods, analysis of the data, and accessibility of the data. No matter what tools are used — whether they are sensors or otherwise — the issues that journalists seek to examine and illuminate are ones that affect many, and on multiple dimensions (individual, local, national, global). If we are truly talking about solving world problems, then the conversation should not be limited to just a few. Instead, it will take an omnibus of talent and problem solving from various disciplines to pull it off.

References

Pitt, Sensors and Journalism, Tow Center for Digital Journalism, May 2014.

Chicago’s Array of Things project

Sun Sentinel’s “Above the Law” series

Past Events

Journalism After Snowden Lecture Series

Journalism After Snowden Lecture Series

Presented by: Columbia University Graduate School of Journalism and the Information Society Project at Yale Law School

 

September 29, 2014

6:00pm-7:30pm, with reception to follow

Brown Institute for Media Innovation at Columbia University Graduate School of Journalism

 

Source Protection: Rescuing a Privilege Under Attack

Speaker: David A. Schulz, Partner at Levine Sullivan Koch & Schulz, LLP

Moderator: Emily Bell, Director of the Tow Center for Digital Journalism

 

Watch Full Lecture 

David Schulz | Outside Counsel to The Guardian; Lecturer, Columbia Law School; Partner, Levine Sullivan Koch & Schulz LLP | @LSKSDave
David Schulz heads the New York office of Levine Sullivan Koch & Schulz, L.L.P. a leading media law firm with a national practice focused exclusively on the representation of news and entertainment organizations in defamation, privacy, newsgathering, access, copyright, trademark and related First Amendment matters. Schulz has been defending the rights of journalists and news organizations for nearly 30 years, litigating in the trial courts of more than 20 states, and regularly representing news organizations on appeals before both state and federal tribunals. Schulz successfully prosecuted access litigation by the Hartford Courant to compel the disclosure of sealed dockets in cases being secretly litigated in Connecticut’s state courts, and the challenge by 17 media organizations to the closure of jury selection in the Martha Stewart criminal prosecution. He successfully defended against invasion of privacy claims brought by Navy SEALS whose photos with injured Iraqi prisoners were discovered on-line by a reporter, and has prevailed in Freedom of Information Act litigation pursued by the Associated Press to compel the release of files relating to detainees held by the Department of Defense at Guantanamo Bay and to records of the military service of President George W. Bush. Schulz is described as an “incredibly skilled” litigation strategist and a “walking encyclopedia” of media law by Chambers USA (Chambers & Partners, 2006), and is recognized as one of the nation’s premier First Amendment lawyers by The Best Lawyers in America (Woodward/White, 2006). He regularly represents a broad range of media clients, including The New York Times, Associated Press, CBS Broadcasting, Tribune Company, and The Hearst Corporation, along with other national and local newspapers, television networks and station owners, cable news networks, and Internet content providers. Schulz is the author of numerous articles and reports, including Policing Privacy, 2007 MLRC Bulletin 25 (September 2007); Judicial Regulation of the Press? Revisiting the Limited Jurisdiction of Federal Courts and the Scope of Constitutional Protection for Newsgathering, 2002 MLRC Bulletin 121 (April 2002); Newsgathering as a Protected Activity, in Freedom of Information and Freedom of Expression: Essays in Honour of Sir David William (J. Beatson & Y. Cripps eds., Oxford University Press 2000); and Tortious Interference: The Limits of Common Law Liability for Newsgathering, 4 Wm. & Mary Law Bill Rts. J. 1027 (1996) (with S. Baron and H. Lane). He received a B.A. from Knox College in Galesburg, Illinois, where he has served for more than twenty years on the Board of Trustees. He received his law degree from Yale Law School, and holds a master’s degree in economics from Yale University.

 

 

Announcements, Past Events, Research

Sensors and Certification

This is a guest post from Lily Bui, a sensor journalism researcher from MIT’s Comparative Media Studies program.

On October 20, 2014, Creative Commons Science convened a workshop involving open hardware/software developers, lawyers, funders, researchers, entrepreneurs, and grassroots science activists around a discussion about the certification of open sensors.

To clarify some terminology, a sensor can either be closed or open. Whereas closed technologies are constrained by an explicitly stated intended use and design (e.g., an arsenic sensor you buy at Home Depot), open technologies are intended for modification and not restricted to a particular use or environment (e.g., a sensor you can build at home based on a schematic you find online).

Over the course of the workshop, attendees listened to sessions led by practitioners who are actively thinking about whether and how a certification process for open hardware might mitigate some of the tensions that have arisen within the field, namely around the reliability of open sensor tools and the current challenges of open licensing. As we may gather from the Tow Center’s Sensors and Journalism report, these tensions become especially relevant to newsrooms thinking of adopting open sensors for collecting data in support of journalistic inquiry. Anxieties about data provenance, sensor calibration, and best practices for reporting sensor data also permeate this discussion. This workshop provided a space to begin articulating what sensor journalism needs in order to move forward.

Below, I’ve highlighted the key points of discussion around open sensor certification, especially as they relate to the evolution of sensor journalism.

Challenges of Open Sensors

How, when, and why do we trust a sensor? For example, when we use a thermometer, do we think about how well or often it has been tested, who manufactured it, or what standards were used to calibrate it? Most of the time, the answer is no. The division of labor that brings the thermometer to you is mostly invisible, yet you inherently trust that the reading it gives is an accurate reflection of what you seek to measure. So, what is it that instantiates this automatic trust, and what needs to happen around open sensors for people to likewise have confidence in them?

At the workshop, Sonaar Luthra of Water Canary led a session about the complexities and challenges that accompany open sensors today. Most concerns revolve around accuracy, both of the sensor itself and of the data it produces. One reason for this is that the manufacture and integration of sensors are separate processes (that is to say, for example, InvenSense manufactures an accelerometer and Apple integrates it into the iPhone). Similarly, within the open source community, the development and design of sensors and their software are often separate from an end user’s assembly—a person looks up the open schematic online, buys the necessary parts, and builds the sensor at home. This division of labor erodes the boundaries between hardware, software, and data, creating a need to recast how trust is established in sensor-based data.

For journalists, a chief concern around sensor data is ensuring, with some degree of confidence, that the data collected from the sensor is not erroneous and won’t add misinformation to the public sphere if published. Of course, this entirely depends on how and why the sensor is being used. If we think of accuracy as a continuum, then the degree of accuracy required can vary depending on the context. If the intent is to gather a lot of data and look at general trends—as was the case with the Air Quality Egg, an open sensor that measures air quality—point-by-point accuracy is less of a concern, since engagement is the end goal. However, different purposes and paradigms require different metrics. In the case of StreetBump, a mobile app that uses accelerometer data to help identify potential potholes, accuracy is a much more salient issue, as direct intervention from the city would mean allocating resources and labor toward the locations the sensor data suggests. Thus, creating a model to work toward shared parameters, metrics, resources, and methods might be useful to generate consensus and alleviate factors that threaten data integrity.

There may also be alternative methods for verifying and accounting for known biases in sensor data. Ushahidi’s Crowdmap is an open platform used internationally to crowdsource crisis information. The reports depend on a verification system in which other users assess their accuracy. One can imagine a similar system for sensor data, pre-publication or even in real time. If a sensor has a known bias in a certain direction, it is also possible to compare its data against an established standard (e.g., EPA data) and account for the bias when reporting on the data.
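
As a minimal sketch of that kind of bias correction (all readings and the co-location setup here are hypothetical), one could estimate a sensor’s average offset against a reference monitor and subtract it from subsequent field readings, disclosing the adjustment:

```python
# Hypothetical co-located readings: a low-cost sensor next to a reference monitor (e.g., an EPA station).
sensor_readings    = [42.0, 55.5, 61.0, 48.5]   # e.g., PM2.5 in micrograms per cubic meter
reference_readings = [38.0, 50.0, 56.5, 44.0]

def estimate_bias(sensor, reference):
    """Mean offset of the sensor relative to the reference over the calibration period."""
    return sum(s - r for s, r in zip(sensor, reference)) / len(sensor)

def correct(field_readings, bias):
    """Subtract the known bias from field readings before reporting them."""
    return [round(x - bias, 1) for x in field_readings]

bias = estimate_bias(sensor_readings, reference_readings)
print(bias)                          # sensor reads ~4.6 units high on average
print(correct([47.3, 52.8], bias))   # bias-adjusted field readings
```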

To further investigate these questions, we can look toward extant models of verification in open science and technology communities. The Open Geospatial Consortium provides a way of thinking about interoperability among sensors, which requires that a consensus around standards or metrics be established. Alternatively, the Open Sensor Platform suggests ways of thinking about data acquisition, communication, and interpretation across various sensor platforms.

Challenges of Open Licensing for Sensors

A handful of licensing options exist for open hardware, including the CERN Open Hardware License, Open Compute Project License, and Solderpad License. Other intellectual property strategies include copyright (which can be easily circumvented and is sometimes questionable when it comes to circuits), patenting (which is difficult and costly to attain), and trademark (an option that offers a lower barrier to entry and would best meet the needs of open source approaches). However, whether formal licensing should be applied to open hardware at all remains an open question, as it would inevitably impose restrictions on a design or version of hardware that—within the realm of open source—is still susceptible to modification by the original developer or the open source community writ large. In other words, a licensing or certification process would turn what is now an ongoing project into a final product.

Also, in contrast to open software, wherein the use of open code is clearly demarcated and tracked by the process of copying and pasting, it is less clear at what point a user actually agrees to using open hardware (i.e., upon purchase or assembly, etc.) since designs often involve a multitude of components and are sometimes accompanied by companion software.

A few different approaches to assessing open sensors emerged during the workshop:

  1. Standards. A collaborative body establishes interoperable standards among open sensors, allowing independent but overlapping efforts to work together. (Targeted toward the sensor.)
  2. Certification/Licensing. A central body controls a standard, facilitates testing, and manages intellectual property. (Targeted toward the sensor.)
  3. Code of conduct. A suggested set of uses and contexts for the sensor, i.e., how to use it and how not to use it. (Targeted toward people using the sensor.)
  4. Peer assessment. Self-defined communities test and provide feedback on sensors as seen in the Public Lab model. (Targeted toward the sensor but facilitated by people using it.)

In the case of journalism, methods of standardization would depend on how much (or little) granularity of data is necessary to effectively tell a story. In the long run, it may be that the means of assessing a sensor will be largely contextual, creating a need to develop a multiplicity of models for these methods.

Preliminary Conclusions

While there is certainly interest from newsrooms and individual journalists in engaging with sensor tools as a valid means for collecting data about their environments, it is not yet apparent what newsrooms and journalists expect from open sensors and for which contexts open sensor data is most appropriate. The products of this workshop are relevant to evaluating what standards—if any—might be necessary to establish before sensors can be more widely adopted by newsrooms.

In the future, we must keep some important questions in mind: What matters most to newsrooms and journalists when it comes to trusting, selecting, and using a sensor tool for reporting? Which sensor assessment models would be most useful, and in which context(s)?

With regard to the certification of open sensors, it would behoove all stakeholders—sensor journalists included—to determine a way to move the discourse forward.

References

  1. Fergus Pitt, Sensors and Journalism, Tow Center for Digital Journalism, May 2014.
  2. N. Bourbakis and A. Pantelopoulos, “A Survey on Wearable Sensor-Based Systems for Health Monitoring and Prognosis,” IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 40, no. 1 (Jan. 2010).
  3. Open Source Hardware Association (OSHWA), Definition page.

Events, Past Events

The Tow Responsive Cities Initiative

0

The Tow Responsive Cities Initiative
Workshop with Susan Crawford
Friday, 10/31 – 9:00 am

By invitation only

The extension of fiber optic high-speed Internet access connections across cities in America could provide an opportunity to remake democratic engagement over the next decade. Cities would have the chance to use this transformative communications capacity to increase their responsiveness to constituents, making engagement a two-way, nuanced, meaningful part of what a city does. The political capital that this responsiveness would generate could then be allocated to support big ideas that could address the problems facing many American cities, including growing inequality, diminishing quality of life, and movement of jobs outside the city’s borders.

announcements-home, Events, Past Events

Recap: Source Protection in the Information Age

0

“Assert the right to report.” That was the mandate Columbia’s Sheila Coronel gave our group of journalists and online privacy and security advocates this past Saturday morning, kicking off a day full of panels and workshop activities on the theme of “Source Protection in the Information Age.” In this post-Snowden age, we were reminded that as scrutiny from the government and other authority structures intensifies, simple source protection becomes something more. As Aaron Williamson put it succinctly in the morning’s first panel: “Using encryption is activism. It’s standing up for your right to keep communications private.”

How to be an effective activist, then? The day’s emphasis was intensely practical: know your tools. We each had the opportunity to cycle through 6 of 14 available workshops. The spread effectively covered the typical activities journalists engage in: research, communication, and writing. That translated into a focus on encrypted messaging via chat and email, anonymous browsing via Tor, and desktop tools like the portable Tails operating system, which lets journalists securely develop and store their research and writing. Snowden himself used Tails to evade the NSA’s scrutiny. We also received timely reminders about creating secure passwords and remembering that third parties are aware of our every move online.

Throughout, we were reminded of an important fact: you’re only as strong as your weakest participant. So journalists need not only to embrace these tools but also to educate their sources in how to use them effectively, and to learn how to negotiate the appropriate means and levels of security for communicating with those sources.

That’s where the user experience of these tools becomes so important. The most successful tools are bound to be those that are quick to install and intuitive to use. While some are as easy to download and install as a browser or plugin (Tor, Ghostery), others involve complex steps and technical knowledge that might intimidate some users. That fact underlines the need to apply user-centered design principles to these excellent tools if they’re to be universally adopted. We have to democratize access to them.

Another tension point was the concern that using secure tools actually draws attention to the individual. A valid fear, perhaps, but the answer isn’t to abandon the tools but to employ them more often, even when security isn’t a concern. Increase the signal to noise ratio. On that note, the day was a success. Many of us, who were more or less aware of this issue, left not just enriched with more knowledge, but with laptops sporting a few more tools to empower us as activists.

Robert Stribley is the Associate Experience Director at Razorfish. You can follow him on Twitter at @stribs.

For resources and information about this and future events, visit our Source Protection: Resources page, and follow organizers/hosts Sandy Ordonez, Susan McGregor, Lorenzo Franceschi-Bicchierai and the Tow Center on Twitter.

announcements-home, Events, Past Events, Tips & Tutorials

Source Protection: Resources

2
We are happy to report that many of the attendees of our October 11 workshop on Source Protection in the Information Age left with a good foundation in digital security, and that trainers gained a better understanding of the challenges journalists face in becoming more secure.
This was a collaboratively organized event that brought together organizations and individuals passionate about the safety and security of journalists. We remain committed to supporting this collaboration and will be planning future workshops.
If you weren’t able to attend the event, we recommend starting with this brief recap. In addition, we would like to share some resources that you may find useful for continuing to develop your skills and understanding in this area.
Enjoy!
The organizers
(Lorenzo, Susan, Sandy & George)

Workshop Panel Videos

Panel 1: How technology and the law put your information at risk

Runa Sandvik, James Vasile, Aaron Williamson | Moderated by Jenn Henrichsen

Panel 2: Source protection in the real world – how journalists make it work

Online Resources

Workshop Resources

Online Library

Tactical Tech Collective

Tactical Tech’s Privacy & Expression program builds the digital security awareness and skills of independent journalists and of anyone else concerned about the security risks and vulnerabilities of digital tools. On their website you can find manuals, short films, interactive exercises, and well-designed how-tos.

Upcoming Privacy & Security Events

October 20 | 6:30pm | Tracked Online: How it’s done and how you can protect yourself
Techno-Activism 3rd Mondays (TA3M) is a community-run monthly meetup that happens in 21 cities throughout the world. It is a good place to meet and learn from individuals who work on anti-surveillance and anti-censorship issues. The October edition of NYC TA3M will feature the former product lead of Ghostery, who will explain how third parties track you online, what information they collect, and what you can do to protect yourself. If you would like to be alerted to upcoming TA3M events, contact Sandra Ordonez at sandraordonez@openitp.org.
RSVP: 

Circumvention Tech Festival

The Circumvention Tech Festival will take place March 1–6 in Valencia, Spain. The festival gathers the community fighting censorship and surveillance for a week of conferences, workshops, hackathons, and social gatherings, featuring many of the Internet Freedom community’s flagship events. This includes a full day of journalism security events, conducted in both English and Spanish. It is a great opportunity to meet the pioneers of digital security.
RSVP: 

 

Research

The New Global Journalism: New Tow Center Report

2
Throughout the twentieth century the core proposition of foreign correspondence was to bear witness—to go places where the audience couldn’t and report back on what occurred. Three interrelated trends now challenge this tradition. First, citizens living through events can tell the world about them directly via a range of digital technologies. Second, journalists themselves have the ability to report on some events, particularly breaking news, without physically being there. Finally, the financial pressures that digital technology has brought to legacy news media have forced many to close their international bureaus.

In this age of post-legacy media, local reporters, activists, ordinary citizens—and traditional foreign correspondents—are all now using digital technologies to inform the world of breaking news, and to offer analysis and opinions on global trends. These important changes are documented in the Tow Center’s report The New Global Journalism: Foreign Correspondence in Transition.

The report’s authors include Kelly Golnoush Niknejad, founder of Tehran Bureau, a digital native site based in London that reports on Iran using dispatches from correspondents both inside and outside the country. Anup Kaphle, digital foreign editor of The Washington Post, profiles the legacy foreign desk in transition, as it seeks to merge the work of traditional correspondents and the contributions of new digital reporters, who may never leave the newsroom as they write about faraway events.

These new practices require new skills, and Ahmed Al Omran, a Wall Street Journal correspondent in Saudi Arabia, walks reporters through eight tactics that will improve their use of digital tools to find and verify information on international stories. The Internet security issues raised by this work are discussed in another chapter by Burcu Baykurt, a PhD candidate at Columbia Journalism School, who also examines how Internet governance affects journalists and others using the web to disseminate information.

And Jessie Graham, a former public radio reporter who is now a multimedia producer for Human Rights Watch, describes the shifting line between advocacy and journalism. Some of the journalists affected by closings of foreign bureaus have made comfortable transitions to jobs in advocacy organizations — while those same organizations have increasingly developed their media skills to communicate directly with audiences, without the filter of mainstream media.

Through practical guidance and descriptions of this changing journalistic ecosystem, the Tow Center hopes that The New Global Journalism can help define a new, hybrid foreign correspondent model—not a correspondent who can do everything, but one open to using all reporting tools and a wide range of sources to bring audiences a better understanding of the world.

 

Download the Full Report Here: The New Global Journalism

Chapters:

Being There: The Virtual Eyewitness, By Kelly Golnoush Niknejad

The Foreign Desk in Transition: A Hybrid Approach to Reporting From There—and Here, By Anup Kaphle

A Toolkit: Eight Tactics for the Digital Foreign Correspondent, By Ahmed Al Omran

David Versus Goliath: Digital Resources for Expanded Reporting—and Censoring,  By Burcu Baykurt

A Professional Kinship? Blurring the Lines Between International Journalism and Advocacy, By Jessie Graham

Edited By: Ann Cooper and Taylor Owen

Research

Sensors and Journalism: ProPublica, Satellites and The Shrinking Louisiana Coast

2

Two months before the journalist-developers at ProPublica would be ready to go live with an impressive news app illustrating the huge loss of land along the Louisiana coastline, the development team gathered in their conference room above the financial district in Manhattan.

This was partly a show-off session and partly a review. Journalist-developers Brian Jacobs and Al Shaw pulled a browser up onto the glossy 46-inch screen and loaded up their latest designs. At first it appears to be simple satellite photography, spanning about 20,000 square miles, but the elegance hides a complicated process to pull layers of meaning from many rich data sets.

At the heart of the story is the fact that the Louisiana coastline loses land at a rate equivalent to a football field each hour. That comes to 16 square miles per year. The land south of New Orleans has always been low-lying, but since the Army Corps of Engineers built levees along the Mississippi after the huge 1927 floods, the delta has been losing ground. Previously, the river carried sediment down and deposited it to gradually build up dry land throughout the delta. The same levees that protect upstream communities now keep that sediment confined to the river channel, so it no longer settles out to become Louisiana coastline. Environmental researchers say that the energy industry’s canal-dredging and well-drilling have accelerated natural erosion. Together, the constricted river and the oil extraction have exacerbated the effects of sea-level rise from climate change.

The loss of ground endangers people. The dry land used to protect New Orleans’ people and businesses, because when storms like Hurricane Katrina sweep in from the Gulf of Mexico, they lose power as they move from water to land. It’s therefore crucial to have a wide buffer between the sea and the city. Now, with roughly 2,000 fewer square miles of protective land, the state will have to spend more money building tougher, higher walls, flood insurance will be more costly, infrastructure could break, and the people inside those walls risk death and injury at much higher rates. If the land loss isn’t slowed, the costs will only get higher.

Satellites Clearly Show The Story

For this story, Al Shaw’s goal was to illustrate the scale and severity of the problem. Print journalists have written extensively on the story, but the forty years’ worth of remote sensing data available from NASA’s Landsat satellites helped the ProPublica journalists show it with immediate power and clarity. They processed Landsat 8 sensing data themselves and drew on the US Geological Survey’s interpretations of data from earlier Landsat craft.

The project combines a high-level view with eight zoomed-in case studies. The scene of land, marsh, and water known locally as the Texaco Canals is one of the most dramatic examples. Starting with data collected from aerial photography in the 1950s and ending with 2012 satellite data, the layered maps show how the canals sliced up the marshlands and how the relocated soil stopped sediment from replenishing the land. The result is an area that starts mostly as land and ends mostly as open water. Contemporary and archival photos complement the bird’s-eye view with a human-level perspective.

This is Satellite Sensing’s Learning Curve

At this point, we need to reveal a conflict of interest. In February 2014 the Tow Center provided training to four journalists from ProPublica. Lela Prashad, a remote sensing specialist who has worked with NASA, led a two-day workshop covering the fundamental physics of satellite sensing, a briefing on the different satellite types and their qualities, where to find satellite data, and the basics of how to process it. ProPublica news apps director Scott Klein had attended a Tow Center journalistic sensing conference eight months earlier, where a presentation by Arlene Ducao and Ilias Koen on their satellite infrared maps of Jakarta convinced him that ProPublica’s innovative newsroom might be able to use remote sensing to cover some of its environmental stories in new ways.

To produce this work, the ProPublica journalists learned the physics and applications of remote sensing technology. The earth’s surface pushes energy out into the atmosphere and space – some of it an immediate reflection of the sun’s rays, some of it energy absorbed earlier and re-emitted. Human sources like city lights and industrial activity also produce emissions. Energy waves range from high-frequency, short-wavelength gamma rays and x-rays, through ultraviolet, into the visible spectrum (what human eyes sense), and on toward the longer wavelengths of infrared, microwave, and radio.
Satellites flown by NASA (and by increasing numbers of private companies) point cameras toward Earth, capturing the energy that passes through the atmosphere: the ultraviolet, visible, and infrared bands. (The various generations of satellites have had different capabilities; as they have developed, they have recorded Earth in more detail and passed overhead more frequently.)
Those scenes, when processed, can reveal with great accuracy the materials that form Earth’s surface. The exact hue of each pixel represents the specific substance below. Geologists needing to identify types of rock take advantage of the fact that, for example, sandstone reflects a different combination of energy waves than granite. Food security analysts can assess the moisture, and therefore the health, of a country’s wheat crop – helping them predict shortages (and speculators predict pricing). ProPublica is showing the Louisiana coastline changing over time from dry land, to marsh, to open water.

The US Geological Survey (USGS) makes its data available through a series of free online catalogues. Registered users can specify the area they are interested in, pick a series of dates, and download image files that include all the available energy bands. Crucially, those image files include the Geographic Information Systems (GIS) metadata that allow journalists to precisely match the pixels in the data files to known locations.
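The article doesn’t name the exact libraries ProPublica used beyond GDAL and ImageMagick, but as an illustration of what that embedded metadata buys you, the rasterio library (a Python wrapper around GDAL) can open a downloaded band and translate pixel coordinates into real-world ones. The file name below is a placeholder:

```python
import rasterio

# Placeholder path to a single Landsat 8 band downloaded from the USGS.
with rasterio.open("landsat_scene_B4.TIF") as src:
    band = src.read(1)                  # pixel values as a numpy array
    print("size:", src.width, "x", src.height)
    print("coordinate system:", src.crs)
    print("affine transform:", src.transform)
    # Convert a pixel coordinate (column, row) to a real-world coordinate.
    x, y = src.transform * (100, 200)
    print("pixel (100, 200) sits at:", x, y)
```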

 

How The Developers Built it

Brian Jacobs learned how to reproduce and combine the information in an accessible form for ProPublica’s online audience. The opening scene of the app has eight layers. The top one uses a scanned copy of a 1922 survey map owned by the USGS and scanned by the Louisiana State University library. Jacobs pulled it into his mapping software to match the geographic features with GIS location data and used Photoshop to prepare it for online display, cutting out the water and normalizing the color.

The bottom layer displays the 2014 coastline, stitched together from six Landsat 8 tiles through many steps of processing. Jacobs picked out images from satellite passes when the skies were free of cloud cover. After pulling in the image tiles from the infrared and true-color bands and merging them together, he normalized the distortions and color differences so the separate images would mosaic consistently.
Working with the command-line tools GDAL (a geospatial library) and ImageMagick (an image editing suite), he prepared them for online display. Pictures of the Earth’s curved surface need to be slightly warped to make sense as a flat image; the types of warps are called projections. The raw USGS images come in the space industry’s WGS84-based projection standard, but the web mostly uses the Web Mercator projection. (Here’s Wikipedia‘s explanation, and xkcd’s cartoon version.)
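ProPublica did this warping with GDAL’s command-line tools; an equivalent sketch in Python, again using rasterio and placeholder file names, reprojects a scene into Web Mercator so it lines up with standard web-map tiles:

```python
import rasterio
from rasterio.warp import calculate_default_transform, reproject, Resampling

dst_crs = "EPSG:3857"  # Web Mercator, the projection most web maps expect

with rasterio.open("scene_wgs84.tif") as src:  # placeholder input file
    transform, width, height = calculate_default_transform(
        src.crs, dst_crs, src.width, src.height, *src.bounds)
    meta = src.meta.copy()
    meta.update(crs=dst_crs, transform=transform, width=width, height=height)

    with rasterio.open("scene_webmercator.tif", "w", **meta) as dst:
        for band in range(1, src.count + 1):
            reproject(
                source=rasterio.band(src, band),
                destination=rasterio.band(dst, band),
                src_transform=src.transform,
                src_crs=src.crs,
                dst_transform=transform,
                dst_crs=dst_crs,
                resampling=Resampling.bilinear,  # smooth interpolation for imagery
            )
```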

Researchers who work with remote sensing have a specific language and set of practices for how they treat color in their visualizations. The ProPublica journalists adopted some of those practices, but also needed to produce their work for a lay audience. So, although the features on ProPublica’s maps are easily recognizable, they are not what’s known as ‘true color.’ When viewers look closely at the bottom layer, it’s clear that these are not simply aerial photographs. In comparison to the satellite photography displayed via Google Maps, the ProPublica layer has a much sharper contrast between land and water. The green pixels showing land are vibrant, while the blue sections showing water are rich, deep blues.

The color palette is, in fact, a combination of two sets of satellite data: the water pixels are interpreted from Landsat’s infrared and green bands, while the land pixels come from Landsat’s ‘true color’ red, green, and blue bands, with extra sharpening from the panchromatic band (panchromatic imagery appears as shades of gray, but can be blended with the color bands). At 30m per pixel, Landsat’s color bands are lower resolution than its 15m-per-pixel panchromatic band.
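The article doesn’t spell out ProPublica’s exact band math, but a common way to separate water from land using only the green and near-infrared bands is the Normalized Difference Water Index: water reflects green light and absorbs near-infrared, so the index flips sign between the two surfaces. A minimal sketch, assuming the two bands are already loaded as aligned arrays:

```python
import numpy as np

def water_mask(green, nir, threshold=0.0):
    """Classify pixels as water using the Normalized Difference Water Index.

    NDWI = (green - nir) / (green + nir); water absorbs near-infrared,
    so it scores high, while vegetation and bare land score low.
    """
    green = green.astype(float)
    nir = nir.astype(float)
    ndwi = (green - nir) / (green + nir + 1e-9)  # avoid division by zero
    return ndwi > threshold

# Tiny synthetic example: the second pixel behaves like open water.
green = np.array([[0.10, 0.30]])
nir = np.array([[0.25, 0.05]])
print(water_mask(green, nir))   # [[False  True]]
```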

Step By Step Frames

  1. A detail of a single tile in the true-color bands, somewhat color-corrected.
  2. Pansharpening: the developers have stitched together multiple tiles of their area and combined images from the true-color and panchromatic bands (a minimal sketch of this step follows this list).
  3. The water mask: produced from the near-infrared and green bands, it is used to distinguish areas of land from water.
  4. Pansharpened, zoomed: the final result of ProPublica’s satellite image processing, with the images pansharpened and the water layer included from the near-infrared and green bands.
  5. The final view that ProPublica showed its users: the underlay for the case studies, in which land pixels combine the true-color bands with the high-resolution panchromatic band and water pixels come from the infrared and green bands.
  6. Google satellite view, zoomed: the same area as shown in Google Maps’ satellite view, which mostly uses true-color satellite imagery for land and bathymetry data for water.
  7. A detail of the USGS map of the region: each color represents a period of land loss, and ProPublica extracted each period into a separate layer of its interactive map.
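The article doesn’t say which pansharpening method ProPublica used; one simple, widely used option is the Brovey transform, which scales each color band by the ratio of the high-resolution panchromatic band to the bands’ combined intensity. A rough sketch, assuming the 30m color bands have already been resampled onto the 15m panchromatic grid:

```python
import numpy as np

def brovey_pansharpen(red, green, blue, pan):
    """Sharpen coarse color bands with the panchromatic band (Brovey transform).

    All inputs are float arrays on the same (panchromatic-resolution) grid.
    Each band is scaled by pan / mean(r, g, b), injecting the pan band's
    spatial detail while roughly preserving the color balance.
    """
    intensity = (red + green + blue) / 3.0 + 1e-9
    ratio = pan / intensity
    return red * ratio, green * ratio, blue * ratio

# Toy 2x2 example with a brighter panchromatic pixel in one corner.
r = np.full((2, 2), 0.2)
g = np.full((2, 2), 0.3)
b = np.full((2, 2), 0.1)
pan = np.array([[0.2, 0.2], [0.2, 0.4]])
print(brovey_pansharpen(r, g, b, pan)[0])
```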

The other layers come from a range of sources. In the opening scene, viewers can bring up overlays of the human-built infrastructure associated with the oil and gas industry: the wells and pipelines, the dredged canals, and the levees that protect the homes and businesses along the coastline.

When users zoom in to one of ProPublica’s case studies, they can scrub through another 16 layers. Each one shows a slice of time when the coastline receded; a layer of olive green pixels indicates the remaining dry land. The data for these 16 layers came from researchers at the US Geological Survey who had analyzed 37 years of satellite data combined with historical surveys and mid-century aerial photography. ProPublica worked with John Barras at the USGS, a specialist who could draw on years of his own work and decades of published studies. He handed over a large geo-referenced image file exported from the software suite ERDAS Imagine, with each period’s land loss rendered in a separate color.
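That hand-off made the extraction step fairly mechanical: each land-loss period occupies its own class value (color) in the geo-referenced raster, so pulling a period out is a matter of masking on that value. The class codes below are invented for illustration; only the masking pattern is the point:

```python
import numpy as np

# Hypothetical class codes in a land-loss raster: 0 = land remaining,
# 1, 2, 3, ... = land lost during successive periods.
usgs_classes = np.array([
    [0, 0, 1, 1],
    [0, 2, 2, 3],
    [0, 0, 3, 3],
])

# One boolean layer per land-loss period, ready to be styled and tiled
# as separate overlays in an interactive map.
layers = {code: (usgs_classes == code)
          for code in np.unique(usgs_classes) if code != 0}
for code, mask in layers.items():
    print(f"period {code}: {mask.sum()} pixels lost")
```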

The Amount of Time, Skill and Effort

Scott Klein described this project as one of ProPublica’s larger ones, but not abnormally so. His team of developer-journalists releases around twelve projects of this size each year, as well as producing smaller pieces to accompany work by the rest of ProPublica’s newsroom.

For six months, the project was a major focus for Al Shaw and Brian Jacobs, two young, highly skilled, and prized developer-journalists. Shaw has a BA and is highly active in New York’s Hacks/Hackers community. Jacobs is a Knight-Mozilla Fellow working at ProPublica, with a background that includes a year at MIT’s Senseable City Lab and four years as a UI designer at Azavea, a Philadelphia-based geospatial software company. They worked on the project close to full time, with oversight from their director, Scott Klein. During the later stages, ProPublica’s design director David Sleight advised on the interaction design, hired a freelance illustrator, and led user testing. ProPublica partnered with The Lens, a nonprofit public-interest newsroom based in New Orleans, whose environmental reporter Bob Marshall wrote the text. The Lens also sourced three freelance photo researchers and photographers for ProPublica.

ProPublica Have Shared Their Tools

To produce the work, ProPublica had to extend the ‘simple-tiles’ software library it uses to publish maps – a process that soaked up months of developer time. The team has now open-sourced that code – a move that can radically speed up the development process for other newsrooms with skilled developers. In common with most news organizations, ProPublica has historically published interactive maps built from vector graphics, which display as outlines of relatively simple geographic and city features like states, roads, and building footprints. This project renders raster (aka bitmap) images, the kind of file used for complicated or very detailed visual information.

ProPublica’s explanation of the update to simple-tiles is available on its blog, and the code is available via GitHub.

Their Launch

ProPublica put the app live on August 28, exactly nine years after Hurricane Katrina led New Orleans’ mayor to order the city’s first-ever mandatory evacuation.

Announcements, announcements-home, Events, Research

Upcoming Events

0

All-Class Lecture: The New Global Journalism

Tuesday, Sep. 30, 2014, 6:00pm

(Lecture Hall)

Based on a new report from the Tow Center, a panel discussion on how digital technology and social media have changed the work of journalists covering international events. #CJSACL

Panelists include report co-authors: 

Ahmed Al Omran, Saudi Arabia correspondent at The Wall Street Journal

Burcu Baykurt, Ph.D. candidate in Communications at Columbia Journalism School

Jessie Graham, Senior Multimedia Producer at Human Rights Watch

Kelly Golnoush Niknejad, Editor-in-Chief at Tehran Bureau

The program will be moderated by Dean of Academic Affairs Sheila Coronel.

Event begins at 6 PM

RSVP is requested at JSchoolRSVP@Columbia.edu

Announcements, announcements-home, Events

Upcoming Tow Event: Just Between You and Me?

1

Just between you and me?

(Pulitzer Hall – 3rd Floor Lecture Hall)

In the wake of the Snowden disclosures, digital privacy has become more than just a hot topic, especially for journalists. Join us for a conversation about surveillance, security and the ways in which “protecting your source” means something different today than it did just a few years ago. And, if you want to learn some practical, hands-on digital security skills—including tools and techniques relevant to all journalists, not just investigative reporters on the national security beat—stick around to find out what the Tow Center research fellows have in store for the semester.

The event will be held at 6 p.m. on Monday, August 25th in the 3rd Floor Lecture Hall of Pulitzer Hall. We welcome and encourage all interested students, faculty and staff to attend.

How It's Made, Research

Hyper-compensation: Ted Nelson and the impact of journalism

1

NewsLynx is a Tow Center research project and platform aimed at better understanding the impact of news. It is conducted by Tow Fellows Brian Abelson, Stijn DeBrouwere & Michael Keller.

“If you want to make an apple pie from scratch, you must first invent the universe.” — Carl Sagan

Before you can begin to measure impact, you need to first know who’s talking about you. While analytics platforms provide referrers, social media sites track reposts, and media monitoring tools follow mentions, these services are often incomplete and come with a price. Why is it that, on the internet — the most interconnected medium in history — tracking linkages between content is so difficult?

The simple answer is that the web wasn’t built to be *fully* connected, per se. It’s an idiosyncratic, labyrinthine garden of forking paths with no way to navigate from one page to pages that reference it.

We’ve spent the last few months thinking about and building an analytics platform called NewsLynx, which aims to help newsrooms better capture the quantitative and qualitative effects of their work. Many of our features are aimed at giving newsrooms a better sense of who is talking about their work. This seemingly simple feature, understanding the links among web pages, has taken up the majority of our time. The obstacle turns out to be a shortcoming in the fundamental architecture of the web. Without that shortcoming, however, the web might never have succeeded.

Tim Berners-Lee, the creator of the web, didn’t provide a means for contextual links in the specification for HTML. The world wide web wasn’t the only idea for networking computers, however. Over 50 years ago, an early figure in computing had a different vision of the web – a vision that would have made the construction of NewsLynx a lot easier today, if not completely unnecessary.

Around 1960, a man named Ted Nelson came up with an idea for a structure that linked pieces of information in a two-way fashion. Whereas links on the web today just point one way – to the place you want to go – pages on Nelson’s internet would have a “What links here?” capability, so you would know all the websites that point to your page.

And if you were dreaming up the ideal information web, this structure makes complete sense: why not make the most connections possible? As Borges writes, “I thought of a labyrinth of labyrinths, of one sinuous spreading labyrinth that would encompass the past and the future and in some way involve the stars.”

Nelson called his project Xanadu, but it had the misfortune of being both extremely ahead of its time and incredibly late to the game. Project Xanadu’s first and somewhat cryptic release debuted this year: over 50 years after it was first conceived.

In the meantime, Berners-Lee put forward HTML, with its one-way links, in the early ’90s, and it took off into what we know today. One of the reasons for the web’s success is its extremely informal, ad hoc functionality: anyone can put up an HTML page without hooking into or caring about a more elaborate system. Compared to Xanadu, what we use today is the quick-and-dirty implementation of a potentially much richer, but also much harder to maintain, ecosystem.

Two-way linking would not only make impact research easier but would also address a number of other problems on the web. In his latest book, Who Owns the Future?, Jaron Lanier discusses two-way linking as a potential solution to copyright infringement and a host of other web maladies. His logic is that if you could always know who is linking where, then you could create a system of micropayments to make sure authors get proper credit. His idea has its own caveats, but it shows the kinds of systems that two-way linking might enable. Chapter Seven of Lanier’s book discusses some of the other reasons Nelson’s idea never took off.

The desire for two-way links has not gone away, however. In fact, the *lack* of two-way links is an interesting lens through which to view the current tech environment. By creating a central server that catalogs and makes sense of the one-way web, Google adds value through its ability to make the internet seem more like Project Xanadu. If two-way links existed, you wouldn’t need all of the features of Google Analytics. People could implement their own search engines with their own page rank algorithms based on publicly available citation information.

The inefficiency of one-way links left a hole at the center of the web for a powerful player to step in and play librarian. As a result, if you want to know how your content lives online, you have to go shopping for analytics. To effectively monitor the life of an article, newsrooms currently use a host of services from trackbacks and Google Alerts to Twitter searches and ad hoc scanning. Short link services break web links even further. Instead of one canonical URL for a page, you can have a bit.ly, t.co, j.mp or thousands of other custom domains.

NewsLynx doesn’t have the power of Google. But we have been working on a core feature that leverages Google features and other two-way-link surfacing techniques to make monitoring the life of an article much easier: we’re calling them “recipes,” for now (#branding suggestions welcome). In NewsLynx, you’ll add these “recipes” to the system and it will alert you to all pending mentions in one filterable display. If a citation is important, you can assign it to an article or to your organization more generally. We also have a few built-in recipes to get you started.
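This isn’t NewsLynx’s production code; conceptually, though, a recipe is a small, repeatable query that surfaces possible citations of your work. A stripped-down illustration (the alerts feed URL and article list are placeholders) might poll an RSS feed of alerts and flag entries that mention one of your article URLs:

```python
import feedparser

# Placeholder inputs: an RSS feed of alerts and the articles we track.
ALERTS_FEED = "https://example.com/alerts.rss"
TRACKED_ARTICLES = [
    "http://www.example-news.org/2014/coastline-investigation",
]

def run_recipe(feed_url, tracked_urls):
    """Return alert entries whose title, summary, or link mentions a tracked article."""
    feed = feedparser.parse(feed_url)
    mentions = []
    for entry in feed.entries:
        text = " ".join([entry.get("title", ""), entry.get("summary", ""),
                         entry.get("link", "")])
        for url in tracked_urls:
            if url in text:
                mentions.append({"article": url, "source": entry.get("link", "")})
    return mentions

# Pending mentions would then be queued for a person to approve and assign
# to an article or to the organization as a whole.
print(run_recipe(ALERTS_FEED, TRACKED_ARTICLES))
```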

We’re excited to get this tool into the hands of news sites and see how it helps them better understand their place in the world wide web. As we prepare to launch the platform in the next month or so, check back here for any updates.

Past Events

Why We Like Pinterest for Fieldwork: Research by Nikki Usher and Phil Howard

5

Nikki Usher, GWU

Phil Howard, UW and CEU

7/16/2014

Anyone tackling fieldwork these days can choose from a wide selection of digital tools to put in their methodological toolkit. Among the best of these tools are platforms that let you archive, analyze, and disseminate at the same time. It used to be that these were fairly distinct stages of research, especially for the most positivist among us: you came up with research questions, chose a field site, entered the field site, left the field site, analyzed your findings, got them published, and shared your research output with friends and colleagues.

 

But the post-positivist approach that many of us like involves adapting your research questions—reflexively and responsively—while doing fieldwork. Entering and leaving your field site is not a cool, clean and complete process. We analyze findings as we go, and involve our research subjects in the analysis. We publish, but often in journals or books that can’t reproduce the myriad digital artifacts that are meaningful in network ethnography. Actor network theory, activity theory, science and technology studies and several other modes of social and humanistic inquiry approach research as something that involves both people and devices. Moreover, the dissemination of work doesn’t have to be something that happens after publication or even at the end of a research plan.

 

Nikki’s work involves qualitative ethnographic work at field sites, where research can last from five months to a brief week-long visit to a quick drop-in day. She learned the hard way from her research for Making News at The New York Times that failing to find a good way to organize and capture images was a missed opportunity after data collection. Since then, Nikki has been using Pinterest for fieldwork image gathering quite a bit. Phil’s work on The Managed Citizen was set back when he lost two weeks of field notes on the chaotic floor of the Republican National Convention in 2000 (security incinerates all the detritus left by convention goers). He’s been digitizing field observations ever since.

 

Some people put together personal websites about their research journey. Some share over Twitter. And there are plenty of beta tools, open source or otherwise, that people play with. We’ve both enjoyed using Pinterest for our research projects. Here are some points on how we use it and why we like it.

 

How To Use It

  1. When you start, think of this as your research tool and your resource.   If you dedicate yourself to this as your primary archiving system for digital artifacts you are more likely to build it up over time. If you think of this as a social media publicity gimmick for your research, you’ll eventually lose interest and it is less likely to be useful for anyone else.
  2. Integrate it with your mobile phone because this amps up your capacity for portable, taggable, image data collection.
  3. Link the board posts to Twitter or your other social media feeds. Pinterest itself isn’t that lively a place for researchers yet. The people who want to visit your Pinterest page are probably actively following your activities on other platforms so be sure to let content flow across platforms.
  4. Pin lots of things, and lots of different kinds of things. Include decent captions though be aware that if you are feeding Twitter you need to fit character limits.
  5. Use it to collect images you have found online, images you’ve taken yourself during your fieldwork, and invite the communities you are working with to contribute.
  6. Backup and export things once in a while for safe keeping. There is no built-in export function, but there are a wide variety of hacks and workarounds for transporting your archive.

 

What You Get

  1. Pinterest makes it easy to track the progress of the image data you gather. You may find yourself taking more photos in the field because they can be easily arranged, saved and categorized.
  2. Using it regularly adds another level of data, as photos and documents captured on a phone and then added to Pinterest can be quickly captioned in the field and then re-catalogued, giving you a chance to review the visual and built environment of your field site and interrogate your observations afresh.
  3. Visually-enhanced constant comparative methods: post-data collection, you can go beyond notes to images and captions that are easily scanned for patterns and points of divergence. This may be going far beyond what Glaser and Strauss had imagined, of course.
  4. Perhaps most important, when you forget what something looks like when you’re writing up your results, you’ve got an instant, easily searchable database of images and clues to refresh your memory.

Why We Like It

  1. It’s great for spontaneous presentations. Images are such an important part of presenting any research. Having a quick, publicly accessible archive of content allows you to speak, on the fly, about what you are up to. You can’t give a tour of your Pinterest page for a job talk. But having the resource there means you can call on images quickly during a Q&A period, or quickly load something relevant on a phone or browser during a casual conversation about your work.
  2. It gives you a way to interact with subjects. Having the Pinterest link allows you to show a potential research subject what you are up to and what you are interested in. During interviews it allows you to engage people on their interpretation of things. Having visual prompts handy can enrich and enliven any focus group or single subject interview. These don’t only prompt further conversation, they can prompt subjects to give you even more links, images, videos and other digital artifacts.
  3. It makes your research interests transparent. Having the images, videos and artifacts for anyone to see is a way for us to show what we are doing. Anyone with interest in the project and the board link is privy to our research goals. Our Pinterest page may be far less complicated than many of our other efforts to explain our work to a general audience.
  4. You can disseminate as you go. If you get the content flow right, you can tell people about your research as you are doing it. Letting people know about what you are working on is always a good career strategy. Giving people images rather than article abstracts and draft chapters gives them something to visualize and improves the ambient contact with your research community.
  5. It makes digital artifacts more permanent. As long as you keep your Pinterest, what you have gathered can become a stable resource for anyone interested in your subjects. As sites and material artifacts change, what you have gathered offers a permanent and easily accessible snapshot of a particular moment of inquiry for posterity.

 

Pinterest Wish-list

One of us is a Windows Phone user (yes, really) and it would be great if there were a real Pinterest app for Windows Phone. One-touch integration from the iPhone camera roll, much like Twitter, Facebook, and Flickr offer, would be great (though there is an easy hack).

 

We wish it were easier to have open, collaborative boards. Right now, the only person who can add to a board is you, at least at first. You can invite other people to join a “group board” via email, but Pinterest does not have open boards that allow anyone with a board link to add content.

 

Here’s a look at our Pinboards: Phil Howard’s Tech + Politics board, and Nikki Usher’s boards on U.S. Newspapers. We welcome your thoughts…and send us images!


Nikki Usher is an assistant professor at the George Washington University’s School of Media and Public Affairs. Her project with Columbia’s Tow Center for Digital Journalism is Post Industrial News Spaces and Places. Phil Howard is a professor at the Central European University and the University of Washington. His project is a book on Political Power and the Internet of Things for Yale University Press.

 

Announcements, Events, Past Events, Research

Digital Security and Source Protection For Journalists: Research by Susan McGregor

3

EXECUTIVE SUMMARY

The law and technologies that govern the functioning of today’s digital communication systems have dramatically affected journalists’ ability to protect their sources.  This paper offers an overview of how these legal and technical systems developed, and how their intersection exposes all digital communications – not just those of journalists and their sources – to scrutiny. Strategies for reducing this exposure are explored, along with recommendations for individuals and organizations about how to address this pervasive issue.

 

DOWNLOAD THE PDF




Order a (bound) printed copy.

 

DIGITAL SECURITY AND SOURCE PROTECTION FOR JOURNALISTS

Preamble

Digital Security for Journalists: A 21st Century Imperative

The Law: Security and Privacy in Context

The Technology: Understanding the Infrastructure of Digital Communications

The Strategies

Looking Ahead

Footnotes

 

Research

Knight Foundation joins The Tow Foundation as a sponsor for the initiative headed by Columbia University’s Tow Center for Digital Journalism

5


“Tow Center program defends journalism from the threat of mass surveillance,” by Jennifer Henrichsen and Taylor Owen on Knight Blog

NEW YORK – June 10, 2014 – The Journalism After Snowden initiative, a project of The Tow Center for Digital Journalism at Columbia University Graduate School of Journalism, will expand to further explore the role of journalism in the age of surveillance, thanks to new funding from the John S. and James L. Knight Foundation.

Journalism After Snowden will contribute high-quality conversations and research to the national debate around state surveillance and freedom of expression through a yearlong series of events, research projects and articles that will be published in coordination with the Columbia Journalism Review.

Generous funding from The Tow Foundation established the initiative earlier in the academic year. The initiative officially kicked off in January with a high-level panel of prominent journalists and First Amendment scholars who tackled digital privacy, state surveillance and the First Amendment rights of journalists.

Read more in the press release from the Knight Foundation.

Past Events

Glenn Greenwald Speaks | Join the Tow Center for an #AfterSnowden Talk in San Francisco on June 18, 2014

6

Join the Tow Center for an evening lecture with Glenn Greenwald, who will discuss the state of journalism today and his recent reporting on surveillance and national security issues, on June 18, 2014 at 7pm at the Nourse Theater in San Francisco.

In April 2014, Greenwald and his colleagues at the Guardian received the Pulitzer Prize for Public Service. Don’t miss Greenwald speaking in person as he fits all the pieces together, recounting his high-intensity eleven-day trip to Hong Kong, examining the broader implications of the surveillance detailed in his reporting, and revealing fresh information on the NSA’s unprecedented abuse of power with never-before-seen documents entrusted to him by Snowden himself. The event is sponsored by Haymarket Books, the Center for Economic Research and Social Change, the Glaser Progress Foundation, and the Tow Center for Digital Journalism at Columbia Journalism School. Reserve your seat for Glenn Greenwald Speaks / Edward Snowden, the NSA, and the U.S. Surveillance State.

Please note: this is a ticketed event. Tickets are $4.75 each.  | Purchase Tickets

This event is part of Journalism After Snowden, a yearlong series of events, research projects and writing from the Tow Center for Digital Journalism in collaboration with the Columbia Journalism Review. For updates on Journalism After Snowden, follow the Tow Center on Twitter @TowCenter #AfterSnowden.

Journalism After Snowden is funded by The Tow Foundation and the John S. and James L. Knight Foundation.

Lauren Mack is the Research Associate at the Tow Center. Follow her on Twitter @lmack.

Past Events

Tow Center Launches Amateur Footage: A Global Study of User-Generated Content in TV and Online News Output

10

Crediting is rare, there’s a huge gulf between how senior managers and newsdesks talk about it, and there’s a significant reliance on news agencies for discovery and verification. These are some of the key takeaways of Amateur Footage: A Global Study of User-Generated Content in TV and Online News Output, published today by the Tow Center for Digital Journalism.

 

The aim of this research project was to provide the first comprehensive report about the use of user-generated content (UGC) among broadcast news channels. For this report, UGC means photographs and videos captured by people unaffiliated with the newsroom who would not describe themselves as professional journalists.

 

Some of the Principal Findings are:

  • UGC is used by news organizations daily and can produce stories that otherwise would not, or could not, be told. However, it is often used only when other imagery is not available; 40 percent of the UGC on television was related to Syria.
  • There is a significant reliance on news agencies for discovering and verifying UGC. The news agencies have different practices and standards in terms of how they work with UGC.
  • News organizations are poor at acknowledging when they are using UGC and worse at crediting the individuals responsible for capturing it. Our data showed that 72 percent of UGC was not labeled or described as UGC, and just 16 percent of UGC on TV had an onscreen credit.
  • News managers are often unaware of the complexities involved in the everyday work of discovering, verifying, and clearing rights for UGC. Consequently, staff in many newsrooms do not receive the training and support required to develop these skills.
  • Vicarious trauma is a real issue for journalists working with UGC every day, and it’s different from traditional newsroom trauma. Some newsrooms are aware of this, but many have no structured approach or policy in place to deal with it.
  • There is a fear among rights managers in newsrooms that a legal case could seriously affect the use of UGC by news organizations in the future.

 

This research was designed to answer two key questions.  First, when and how is UGC used by broadcast news organizations, on air as well as online?  Second, does the integration of UGC into output cause any particular issues for news organizations? What are those issues and how do newsrooms handle them?

 

The work was completed in two phases. The first involved an in-depth, quantitative content analysis examining when and how eight international news broadcasters use UGC; 1,164 hours of TV output and 2,254 web pages were analyzed. The second was entirely qualitative and saw the team interview 64 news managers, editors, and journalists from 38 news organizations based in 24 countries across five continents. This report draws on both phases to provide a detailed overview of the key findings.

 

The research provides the first concrete figures we have about the level of reliance on UGC by international news channels. It also explores six key issues that newsrooms face in terms of UGC. The report is designed around those six issues, meaning you can dip into any one particular issue:

1) Workflow – how is UGC discovered and verified? Do newsrooms do this themselves, and if so, which desk is responsible? Or is UGC ‘outsourced’ to news agencies?

2) Verification – are there systematic processes for verifying UGC? Is there a threshold that has to be reached before a piece of content can be used?

3) Permissions – how do newsrooms seek permissions? Do newsrooms understand the copyright implications around UGC?

4) Crediting – do newsrooms credit UGC?

5) Labeling – are newsrooms transparent about the types of UGC that they use in terms of who uploaded the UGC and whether they have a specific agenda?

6) Ethics and Responsibilities – how do newsrooms consider their responsibilities to uploaders, the audience and their own staff?

 

The full report can be viewed here.

Events

Upcoming Events: Journalism After Snowden

0

Journalism After Snowden
Finding and Protecting Intelligence Sources After Snowden

A lecture with James Bamford
Tuesday, December 2nd
12:00 pm – 1:30 pm
Yale Law School

RSVP via Eventbrite

James Bamford is an American bestselling author, journalist, and producer widely noted for his writing about the United States intelligence agencies, especially the highly secretive National Security Agency.  The New York Times has called him “the nation’s premier journalist on the subject of the National Security Agency.”  And in a lengthy profile, The New Yorker referred to him as “the NSA’s chief chronicler.”  His most recent book, The Shadow Factory: The Ultra-Secret NSA From 9/11 to The Eavesdropping on America, became a New York Times bestseller and was named by The Washington Post as one of “The Best Books of the Year.” It is the third in a trilogy by Mr. Bamford on the NSA, following The Puzzle Palace (1982) and Body of Secrets (2001), also New York Times bestsellers.