Past Events

Ta-Nehisi Coates at the Tow Center


By Alexandria Neason

Two weeks ago, students, alumni, faculty, and others packed the lecture hall at Columbia Journalism School to see The Atlantic’s Ta-Nehisi Coates, perhaps America’s foremost writer on race, speak about the media’s often contentious relationship with it.  Lately, the news has been more saturated with conversations about race than I can ever remember. Coverage of the policing and killing of black boys and men, of the burden of raising black children, of reparations and so-called “post-racial” multiculturalism has brought to the mainstream what people of color have long known.  America (still) has a race problem.

Photo by Rhon Flatts

When I think about the history of American journalistic writing on race, it is difficult to separate the writing from the activism that it often highlights. It’s hard to imagine white, Northern journalists traveling to the unapologetically violent 1960s-era South without some fledgling belief that what was happening was wrong. It is hard to imagine that a journalist could cover the killings of Sean Bell, of Trayvon Martin and Oscar Grant and Jordan Davis and Mike Brown and all the others – without an understanding that something, somewhere, is deeply wrong.

I’d come to hear Coates speak because I wanted to know how he did it. I wanted to know how he confronted the ongoing legacy of American racism every day on his blog, subject to anonymous “keyboard commanders,” as he referred to them. I wanted to know how he dealt with what I can only assume were vile emails in response to his June cover story on the case for African-American reparations. I wanted to know how he wrote about racist housing policies and the constant loss of young, black life without becoming disempowered by the lack of change. I’d come to hear how he kept going even when the idealism of journalism – to effect change – proved elusive.

Coates talked at length about how he got his start in journalism. He spoke about dropping out of Howard University, and about his disinterest in romanticizing that.  He spoke about the importance of learning to report and to write.  He spoke of the difference between the two.  He spoke about waking up with aching questions and going to bed still bothered by them.  He spoke about reading, about writing constantly (even if it’s bad) as a means of practicing.  He talked about the absolute need to practice.  He told us that writing needs to be a top priority, below only family and health, if we hope to make a career out of it. He told us that if we didn’t love it, to leave.  “It’s just too hard,” he said.

And then he contradicted much of what I’d been taught about journalism. He told us not to expect to change anything with our writing.

I was startled. Wasn’t that the point? To educate, to inform, and ultimately, to change?

No. Coates doesn’t write to change the world. He doesn’t write to change the minds of white people (and he warned both white writers and writers of color of the dangers in doing this). “Don’t write to convince white people,” he said. I found in that statement what is perhaps the advice I needed in order to keep writing.

For a black man, writing about race in a country hard-pressed to ignore its long marriage to it, and doing so with precision and integrity and without apology, is an act of defiance in and of itself. Writing to speak, unburdening yourself of the responsibility of educating your opponents (and, in doing so, inadvertently educating a great many people), is how you keep touching on the untouchable subjects.

After the lecture, a small group of students gathered with Coates in the Brown Institute for a writing workshop sponsored by the Tow Center for Digital Journalism. We were treated to a detailed walkthrough of Coates’s explosive June cover story on reparations for African-American descendants of slaves. We learned how he began his research, how he stayed organized, how he developed his argument, and how it evolved over the year and a half that he worked on the piece. It was clear that he was quite proud of it, not because he changed the minds of readers or because it had drawn so much attention to an issue often brushed off as impossible, but because he’d buried himself in the research, because he’d found a way to put a living, human face on the after-effects of policies that we often discuss as though they have none. The piece is heavily reliant on data, but littered with human faces, human stories, human consequences. It is deeply moving. To me, it was convincing. Overwhelmingly, undeniably convincing. And yet, his motivation was not to convince me, or anyone else, of anything. He wrote to speak.

And speak he did.

Alexandria Neason is an education writer for the Teacher Project at Columbia Journalism School and a 2014 graduate of the school’s M.S. in Journalism program.

Research

Sensor Journalism: Communities of Practice


As seen in the Tow Center’s Sensors & Journalism report, the growing availability and affordability of low-cost, low-power sensor tools enable journalists and publics to collect and report on environmental data, irrespective of government agency agendas. The issues that sensor tools already help measure are manifold: noise levels, temperature, barometric pressure, water contaminants, air pollution, radiation, and more. When aligned with journalistic inquiry, sensors can serve as useful tools to generate data to contrast with existing environmental data, or to provide data where previously none existed. While there are certainly various types of sensor journalism projects with different objectives and outcomes, the extant case studies (as outlined in the Tow report) provide a framework to model forthcoming projects after.
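As a concrete illustration of that data-collection step, here is a minimal sketch, assuming an Arduino-style DIY sensor that prints one reading per line over USB serial and the pyserial package; the port name, unit, and file names are placeholders rather than details from any particular project.

    import csv
    import time

    import serial  # pyserial

    PORT = "/dev/ttyACM0"  # assumed device path for the DIY sensor

    conn = serial.Serial(PORT, 9600, timeout=2)
    with open("noise_readings.csv", "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["unix_time", "decibels"])
        for _ in range(60):  # roughly one reading per second for a minute
            line = conn.readline().decode("ascii", errors="ignore").strip()
            if line:
                writer.writerow([time.time(), line])
    conn.close()

Readings logged this way can then be set alongside an official monitor’s published figures, or fill gaps where no official data exist.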

But it may not be enough to simply identify examples of this work.

Invariably, just as important as building a framework for sensor journalism is building a community of practice for it, one that brings together key players to provide a space for asking critical questions, sharing best practices, and fostering connections and collaborations. Journalism certainly doesn’t happen in a vacuum; it is best served by collaborations with and connections to outside sources. A sensor journalism community has already begun to emerge at the grassroots level, spanning multiple countries, disciplines, and issues of concern. We might look to the nodes in this community to outline a protean map of stakeholders in the field:

Journalists.

Since public opinion can be driven by press coverage, and since storytelling is a central part of news, journalists, documentarians, and media makers with an interest in sensor data play an important role in shaping how the public perceives certain issues. More than that, the media also have the ability to highlight issues that may have slipped under the radar of policymakers. In this scenario, sensor data could potentially serve as evidence for or against a policy decision. Most sensor journalism case studies so far have relied on conventional formats (print, online) to convey the data and the story, but there is much room for experimentation, e.g., sensor-based documentary, radio, interactive documentary, data visualization, and more.

Educators.

In the classroom, there is an undeniable opportunity to cultivate a generation of journalists and media makers who are unintimidated by hardware and technology. Not only this — the classroom also becomes an ideal place to test technology without being beholden to the same restrictions or liabilities as professional newsrooms. Educators across the U.S. have begun incorporating DIY sensors into classroom projects (see Emerson College, Florida International University, and San Diego State University projects), the results of which touch on many of the same questions that professional journalists encounter when it comes to sensor tools. The teaching practices applied to sensor journalism can also be the foundations of training models for professional journalists and civic groups seeking to investigate issues.

Hardware developers.

Because hardware developers design and build the tools that journalists and others would potentially be using, they have a stake in how the tool performs downstream of development. Journalists can also collaborate with hardware developers in identifying tools that would be most helpful: Journalists may have specific requirements for data accuracy, data resolution, range of measurement, or the maturity of their equipment. Likewise, hardware experts can recommend tools that provide access to raw data and transparent interpretation algorithms. On the flip side, some hardware developers, particularly in the open source community, may help identify potential environmental issues of concern that then inform journalists’ research. Recently, a conversation about certification of sensors, which originated within the hardware development community, crystallized around the notion of how to instantiate trust in open sensor tools (or sensors in general) when used for various purposes, journalism included. This is telling of how an open dialogue between hardware developers and journalists might be beneficial in defining these initial collaborative relationships.

Researchers.

Since using sensor tools and data in journalism is new, there is still significant research to be done around the effectiveness of such projects from both a scientific/technological standpoint as well as one of media engagement and impact. Researchers are also best poised, within academia, to examine tensions around data quality/accuracy, sensor calibration, collaborative models, etc. and help provide critical feedback on this new media practice.

Data scientists, statisticians.

Since most journalists aren’t data scientists or statisticians by training, collaborations with data scientists and statisticians have been and should be explored to ensure quality analysis. While some sensor journalism projects are more illustrative and don’t rely heavily on data accuracy, others that aim to affect policy are more sensitive to such considerations. Journalists working with statisticians to qualify their data could produce more defensible statements and, potentially, better-grounded policy decisions.

Activists, advocates.

Because many open sensor tools have been developed and deployed on the grassroots level, and because there is a need to address alternative sources of data (sources that are not proprietary and closed off to the public), activists play a key role in the sensor journalism landscape. Journalists can sometimes become aware of issues from concerned citizens (like the Sun Sentinel’s “Above the Law” series about speeding cops); therefore, it’s essential to cultivate a space in which similarly concerned citizens can voice and discuss concerns that may need further investigation.

Urban designers, city planners, architects.

Many cities already have sensor networks embedded within them. Some of the data from these sensors are proprietary, but some data are publicly accessible. Urban designers, city planners, and architects look to data for context on how to design and build. For instance, the MIT SENSEable City Lab is a group of researchers who often look to sensor data to study the built environment. Sensor data about environmental factors or flow can help inform city design and planning decisions. Journalists or media makers can play a role in completing the feedback loop — communicating sensor data to the public as well as highlighting public opinions and reactions to city planning projects or initiatives.

Internet of Things.

Those working in the Internet of Things space approach sensor networks on a different level. IoT endeavors to build an infrastructure that includes sensors in almost everything so that devices can interact better with people and with each other. At the same time, IoT infrastructures are still in development and the field is just beginning to lay its groundwork in the public consciousness. Imagine motion sensors at the threshold of your house that signal to a network that you’re home, which then turns on the devices that you most commonly use so that they’re ready for you. Now imagine that on a neighborhood or city scale. Chicago’s Array of Things project aims to equip the city with environmental sensors that can report back data in real time, informing residents and the city government about various aspects of the city’s performance. What if journalists could have access to this data and serve as part of a feedback loop back to the public?
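The feedback loop described here is, at bottom, a publish/subscribe pattern. The toy sketch below is library-free and purely illustrative; real deployments such as the Array of Things rely on dedicated messaging protocols (MQTT and the like) and real hardware, and the topic names here are invented.

    from collections import defaultdict

    # topic -> list of handler callables; a stand-in for a real message broker
    subscribers = defaultdict(list)

    def subscribe(topic, handler):
        subscribers[topic].append(handler)

    def publish(topic, payload):
        for handler in subscribers[topic]:
            handler(payload)

    # Devices that should react when someone arrives home.
    subscribe("home/entry/motion", lambda event: print("lights: on"))
    subscribe("home/entry/motion", lambda event: print("thermostat: resume schedule"))

    # The threshold motion sensor fires.
    publish("home/entry/motion", "motion detected")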


By no means is this a complete map of the sensor journalism community. One would hope that the network of interested parties in sensor journalism continues to expand and include others — within policy, legacy news organizations, and more — such that the discourse it generates is a representative one that can both challenge and unite the field. Different methodologies of collecting data with sensors involve different forms of agency. In some sensor journalism scenarios, the agents are journalists; in others, the agents are members of the public; and in others yet, the agents can be governments or private companies. Ultimately, who collects the data affects data collection methods, analysis of the data, and accessibility of the data. No matter what tools are used — whether they are sensors or otherwise — the issues that journalists seek to examine and illuminate are ones that affect many, and on multiple dimensions (individual, local, national, global). If we are truly talking about solving world problems, then the conversation should not be limited to just a few. Instead, it will take a broad range of talent and problem solving from various disciplines to pull it off.

References

Pitt, Sensors and Journalism, Tow Center for Digital Journalism, May 2014.

Chicago’s Array of Things project

Sun Sentinel’s “Above the Law” series

Past Events

Journalism After Snowden Lecture Series


Journalism After Snowden Lecture Series

Presented by: Columbia University Graduate School of Journalism and the Information Society Project at Yale Law School

 

September 29, 2014

6:00pm-7:30pm, with reception to follow

Brown Institute for Media Innovation at Columbia University Graduate School of Journalism

 

Source Protection: Rescuing a Privilege Under Attack

Speaker: David A. Schulz, Partner at Levine Sullivan Koch & Schulz, LLP

Moderator: Emily Bell, Director of the Tow Center for Digital Journalism

 

Watch Full Lecture 

David Schulz | Outside Counsel to The Guardian; Lecturer, Columbia Law School; Partner, Levine Sullivan Koch & Schulz LLP | @LSKSDave
David Schulz heads the New York office of Levine Sullivan Koch & Schulz, L.L.P., a leading media law firm with a national practice focused exclusively on the representation of news and entertainment organizations in defamation, privacy, newsgathering, access, copyright, trademark and related First Amendment matters. Schulz has been defending the rights of journalists and news organizations for nearly 30 years, litigating in the trial courts of more than 20 states, and regularly representing news organizations on appeals before both state and federal tribunals. Schulz successfully prosecuted access litigation by the Hartford Courant to compel the disclosure of sealed dockets in cases being secretly litigated in Connecticut’s state courts, and the challenge by 17 media organizations to the closure of jury selection in the Martha Stewart criminal prosecution. He successfully defended against invasion of privacy claims brought by Navy SEALs whose photos with injured Iraqi prisoners were discovered online by a reporter, and has prevailed in Freedom of Information Act litigation pursued by the Associated Press to compel the release of files relating to detainees held by the Department of Defense at Guantanamo Bay and to records of the military service of President George W. Bush. Schulz is described as an “incredibly skilled” litigation strategist and a “walking encyclopedia” of media law by Chambers USA (Chambers & Partners, 2006), and is recognized as one of the nation’s premier First Amendment lawyers by The Best Lawyers in America (Woodward/White, 2006). He regularly represents a broad range of media clients, including The New York Times, Associated Press, CBS Broadcasting, Tribune Company, and The Hearst Corporation, along with other national and local newspapers, television networks and station owners, cable news networks, and Internet content providers. Schulz is the author of numerous articles and reports, including Policing Privacy, 2007 MLRC Bulletin 25 (September 2007); Judicial Regulation of the Press? Revisiting the Limited Jurisdiction of Federal Courts and the Scope of Constitutional Protection for Newsgathering, 2002 MLRC Bulletin 121 (April 2002); Newsgathering as a Protected Activity, in Freedom of Information and Freedom of Expression: Essays in Honour of Sir David Williams (J. Beatson & Y. Cripps eds., Oxford University Press 2000); and Tortious Interference: The Limits of Common Law Liability for Newsgathering, 4 Wm. & Mary Bill Rts. J. 1027 (1996) (with S. Baron and H. Lane). He received a B.A. from Knox College in Galesburg, Illinois, where he has served for more than twenty years on the Board of Trustees. He received his law degree from Yale Law School, and holds a master’s degree in economics from Yale University.

 

 

Events, Past Events

The Tow Responsive Cities Initiative


The Tow Responsive Cities Initiative
Workshop with Susan Crawford
Friday, 10/31 – 9:00 am

By invitation only

The extension of fiber optic high-speed Internet access connections across cities in America could provide an opportunity to remake democratic engagement over the next decade. Cities would have the chance to use this transformative communications capacity to increase their responsiveness to constituents, making engagement a two-way, nuanced, meaningful part of what a city does. The political capital that this responsiveness would generate could then be allocated to support big ideas that could address the problems facing many American cities, including growing inequality, diminishing quality of life, and movement of jobs outside the city’s borders.

Announcements, Past Events, Research

Sensors and Certification


This is a guest post from Lily Bui, a sensor journalism researcher from MIT’s Comparative Media Studies program.

On October 20, 2014, Creative Commons Science convened a workshop involving open hardware/software developers, lawyers, funders, researchers, entrepreneurs, and grassroots science activists around a discussion about the certification of open sensors.

To clarify some terminology, a sensor can either be closed or open. Whereas closed technologies are constrained by an explicitly stated intended use and design (e.g., an arsenic sensor you buy at Home Depot), open technologies are intended for modification and not restricted to a particular use or environment (e.g., a sensor you can build at home based on a schematic you find online).

Over the course of the workshop, attendees listened to sessions led by practitioners who are actively thinking about whether and how a certification process for open hardware might mitigate some of the tensions that have arisen within the field, namely around the reliability of open sensor tools and the current challenges of open licensing. As we may gather from the Tow Center’s Sensors and Journalism report, these tensions become especially relevant to newsrooms thinking of adapting open sensors for collecting data in support of journalistic inquiry. Anxieties about data provenance, sensor calibration, and best practices on reporting sensor data also permeate this discussion. This workshop provided a space to begin articulating the needs required for sensor journalism to move forward.

Below, I’ve highlighted the key points of discussion around open sensor certification, especially as they relate to the evolution of sensor journalism.

Challenges of Open Sensors

How, when, and why do we trust a sensor? For example, when we use a thermometer, do we think about how well or often it has been tested, who manufactured it, or what standards were used to calibrate it? Most of the time, the answer is no. The division of labor that brings the thermometer to you is mostly invisible, yet you inherently trust that the reading it gives is an accurate reflection of what you seek to measure. So, what is it that instantiates this automatic trust, and what needs to happen around open sensors for people to likewise have confidence in them?

At the workshop, Sonaar Luthra of Water Canary led a session about the complexities and challenges that accompany open sensors today. Most concerns revolve around accuracy, both of the sensor itself and of the data it produces. One reason for this is that the manufacture and integration of sensors are separate processes (that is to say, for example, InvenSense manufactures an accelerometer and Apple integrates it into the iPhone). Similarly, within the open source community, the development and design of sensors and their software are often separate processes from an end user’s assembly—a person looks up the open schematic online, buys the necessary parts, and builds it at home. This division of labor blurs the boundaries between hardware, software, and data, creating a need to recast how trust is established in sensor-based data.

For journalists, a chief concern around sensor data is ensuring, with some degree of confidence, that the data collected from the sensor is not erroneous and won’t add misinformation to the public sphere if published. Of course, this entirely depends on how and why the sensor is being used. If we think of accuracy as a continuum, then the degree of accuracy can vary depending on the context. If the intent is to gather a lot of data and look at general trends—as was the case with the Air Quality Egg, an open sensor that measures air quality—point-by-point accuracy is less of a concern, because engagement is the end goal. However, different purposes and paradigms require different metrics. In the case of StreetBump, a mobile app that uses accelerometer data to help identify potential potholes, accuracy is a much more salient issue, since direct intervention from the city means allocating resources and labor to the locations the sensor data flags. Thus, creating a model to work toward shared parameters, metrics, resources, and methods might be useful to generate consensus and alleviate factors that threaten data integrity.

There may also be alternative methods for verification and accounting for known biases in sensor data. Ushahidi’s Crowdmap is an open platform used internationally to crowdsource crisis information. The reports depend on a verification system from other users for an assessment of accuracy. One can imagine a similar system for sensor data, pre-publication or even in real time. Also, if a sensor has a known bias in a certain direction, it’s also possible to compare data against an established standard (e.g., EPA data) and account for the bias in reporting on the data.
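A hedged sketch of that second idea: comparing a sensor’s readings against co-located reference measurements (an EPA monitor, say) to estimate a systematic bias and correct for it when reporting. The numbers and units below are invented for illustration.

    import statistics

    sensor_ppb = [41.2, 39.8, 44.5, 40.1, 42.9]     # DIY sensor readings
    reference_ppb = [38.0, 36.5, 41.2, 37.0, 39.6]  # co-located reference monitor

    # Average offset between the two instruments over the same period.
    bias = statistics.mean(s - r for s, r in zip(sensor_ppb, reference_ppb))

    # Correct before reporting, and disclose the adjustment alongside the data.
    corrected = [round(s - bias, 1) for s in sensor_ppb]
    print(f"estimated bias: {bias:+.1f} ppb")
    print("corrected readings:", corrected)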

To further investigate these questions, we can look toward extant models of verification in open science and technology communities. The Open Geospatial Consortium provides a way of thinking about interoperability among sensors, which requires that a consensus around standards or metrics be established. Alternatively, the Open Sensor Platform suggests ways of thinking about data acquisition, communication, and interpretation across various sensor platforms.

Challenges of Open Licensing for Sensors

A handful of licensing options exist for open hardware, including the CERN Open Hardware License, Open Compute Project License, and Solderpad License. Other intellectual property strategies include copyright (which can be easily circumvented and is sometimes questionable when it comes to circuits), patenting (which is difficult and costly to attain), and trademark (an option that offers a lower barrier to entry and would best meet the needs of open source approaches). However, whether formal licensing should be applied to open hardware at all remains an open question, as it would inevitably impose restrictions on a design or version of hardware that—within the realm of open source—is still susceptible to modification by the original developer or the open source community writ large. In other words, a licensing or certification process would turn what is now an ongoing project into a final product.

Also, in contrast to open software, wherein the use of open code is clearly demarcated and tracked by the process of copying and pasting, it is less clear at what point a user actually agrees to using open hardware (i.e., upon purchase or assembly, etc.) since designs often involve a multitude of components and are sometimes accompanied by companion software.

A few different approaches to assessing open sensors emerged during the workshop:

  1. Standards. A collaborative body establishes interoperable standards among open sensors, allowing for independent but overlapping efforts. (Targeted toward the sensor.)
  2. Certification/Licensing. A central body controls a standard, facilitates testing, and manages intellectual property. (Targeted toward the sensor.)
  3. Code of conduct. There exists a suggestion of uses and contexts for the sensor, i.e., how to use it and how not to use it. (Targeted toward people using the sensor.)
  4. Peer assessment. Self-defined communities test and provide feedback on sensors as seen in the Public Lab model. (Targeted toward the sensor but facilitated by people using it.)

In the case of journalism, methods of standardization would depend on how much (or little) granularity of data is necessary to effectively tell a story. In the long run, it may be that the means of assessing a sensor will be largely contextual, creating a need to develop a multiplicity of models for these methods.

Preliminary Conclusions

While there is certainly interest from newsrooms and individual journalists in engaging with sensor tools as a valid means for collecting data about their environments, it is not yet apparent what newsrooms and journalists expect from open sensors and for which contexts open sensor data is most appropriate. The products of this workshop are relevant to evaluating what standards—if any—might be necessary to establish before sensors can be more widely adapted into newsrooms.

In the future, we must keep some important questions in mind: What matters most to newsrooms and journalists when it comes to trusting, selecting, and using a sensor tool for reporting? Which sensor assessment models would be most useful, and in which context(s)?

With regard to the certification of open sensors, it would behoove all stakeholders—sensor journalists included—to determine a way to move the discourse forward.

References

  1. Pitt, Sensors and Journalism, Tow Center for Digital Journalism, May 2014.
  2. N. Bourbakis and A. Pantelopoulos, “A Survey on Wearable Sensor-based Systems for Health Monitoring and Prognosis,” Systems, Man, and Cybernetics, Part C: Applications and Reviews, Vol. 40, Iss. 1 (IEEE, Jan. 2010).
  3. Open Source Hardware Association (OSHWA), Definition page.

announcements-home, Events, Past Events

Recap: Source Protection in the Information Age


“Assert the right to report.” That was the mandate Columbia’s Sheila Coronel gave our group of journalists and online privacy and security advocates this past Saturday morning, kicking off a day full of panels and workshop activities on the theme of “Source Protection in the Information Age.” In this post-Snowden age, we were reminded, as scrutiny from the government and other authority structures intensifies, simple source protection becomes something more. As Aaron Williamson put it succinctly in the morning’s first panel: “Using encryption is activism. It’s standing up for your right to keep communications private.”

How to be an effective activist, then? The day’s emphasis was intensely practical: Know your tools. We each had the opportunity to cycle through 6 of 14 available workshops. The spread effectively covered the typical activities journalists engage in: research, communication, and writing. That translated into focuses on encrypted messaging via chat and email, location-anonymous browsing via Tor, and access to desktop tools like the portable Tails operating system, which enables journalists to securely develop and store their research and writing. Snowden used Tails himself to evade the NSA’s scrutiny. We also received timely reminders about creating secure passwords and remembering that third parties are aware of our every move online.
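For the email side of this, here is a minimal sketch of what “encrypting to a source” can look like in practice, assuming the python-gnupg package, a local GnuPG installation, and a source whose public key has already been imported; the address is a placeholder.

    import gnupg

    gpg = gnupg.GPG()  # uses the default GnuPG home directory

    message = "Meet at the usual place at 9."
    encrypted = gpg.encrypt(message, "source@example.org")

    if encrypted.ok:
        # ASCII-armored ciphertext, safe to paste into an email body.
        print(str(encrypted))
    else:
        print("encryption failed:", encrypted.status)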

Throughout, we were reminded of an important fact: You’re only as strong as your weakest participant. So journalists need not only to embrace these tools, they also need to educate their sources in how to use them effectively. They also need to learn how to negotiate the appropriate means and levels of security for communication with sources.

That’s where the user experience of these tools becomes so important. The most successful tools are bound to be those which are quick to install and intuitive to use. While some of those tools are as easy to download and install as a browser or plugin (Tor, Ghostery), others involve complex steps and technical knowledge that might intimidate some users. That fact underlines the need to apply user-centered design principles to these excellent tools if they’re to be universally adopted. We have to democratize access to them.

Another tension point was the concern that using secure tools actually draws attention to the individual. A valid fear, perhaps, but the answer isn’t to abandon the tools but to employ them more often, even when security isn’t a concern: add enough routine noise that the sensitive signal no longer stands out. On that note, the day was a success. Many of us, who were more or less aware of this issue, left not just enriched with more knowledge, but with laptops sporting a few more tools to empower us as activists.

Robert Stribley is the Associate Experience Director at Razorfish. You can follow him on Twitter at @stribs.

For resources and information about this and future events, visit our Source Protection: Resources page, and follow organizers/hosts Sandy Ordonez, Susan McGregor, Lorenzo Franceschi-Bicchierai and the Tow Center on Twitter.

announcements-home, Events, Past Events, Tips & Tutorials

Source Protection: Resources

We are happy to report that many of the attendees of our October 11 workshop on Source Protection in the Information Age left with a good foundation in digital security, and trainers gained a better understanding of the challenges journalists face in becoming more secure. 
This was a collaboratively organized event that brought together organizations and individuals passionate about the safety and security of journalists. We remain committed to continue supporting this collaboration, and will be planning future workshops. 
If you weren’t able to attend the event, we recommend starting with this brief recap. In addition, we would like to share some resources that you may find useful for continuing to develop your skills and understandings in this area.
Enjoy!
The organizers
(Lorenzo, Susan, Sandy & George)

Workshop Panel Videos

Panel 1: How technology and the law put your information at risk

Runa Sandvik, James Vasile, Aaron Williamson | Moderated by Jenn Henrichsen

Panel 2: Source protection in the real world – how journalists make it work

Online Resources

Workshop Resources

Online Library

Tactical Tech Collective

Tactical Tech’s Privacy & Expression program builds the digital security awareness and skills of independent journalists and anyone else who is concerned about the security risks and vulnerabilities of digital tools. On their website you can find manuals, short films, interactive exercises and well-designed how-tos.

Upcoming Privacy & Security Events

October 20 | 6:30pm | Tracked Online: How it’s done and how you can protect yourself
Techno-Activism 3rd Mondays (TA3M) is a community-run monthly meetup that happens in 21 cities throughout the world. It is a good place to meet and learn from individuals who work on anti-surveillance and anti-censorship issues. The October edition of NYC TA3M will feature a former product lead of Ghostery, who will explain how third parties track you online, what information they collect, and what you can do to protect yourself. If you would like to be alerted of upcoming TA3M events, contact Sandra Ordonez at sandraordonez@openitp.org.
RSVP: 

Circumvention Tech Festival

The Circumvention Tech Festival will take place March 1-6 in Valencia, Spain. The festival gathers the community fighting censorship and surveillance for a week of conferences, workshops, hackathons, and social gatherings, featuring many of the Internet Freedom community’s flagship events. This includes a full day of journalist security events, conducted in both English and Spanish. It is a great opportunity to meet the digital security pioneers.
RSVP: 

 

Research

The New Global Journalism: New Tow Center Report

Throughout the twentieth century, the core proposition of foreign correspondence was to bear witness: to go places where the audience couldn’t and report back on what occurred. Three interrelated trends now challenge this tradition. First, citizens living through events can tell the world about them directly via a range of digital technologies. Second, journalists themselves have the ability to report on some events, particularly breaking news, without physically being there. Finally, the financial pressures that digital technology has brought to legacy news media have forced many to close their international bureaus.

In this age of post-legacy media, local reporters, activists, ordinary citizens—and traditional foreign correspondents—are all now using digital technologies to inform the world of breaking news, and to offer analysis and opinions on global trends. These important changes are documented in the Tow Center’s report The New Global Journalism: Foreign Correspondence in Transition.

The report’s authors include Kelly Golnoush Niknejad, founder of Tehran Bureau, a digital native site based in London and reporting on Iran using dispatches from correspondents both inside and outside the country. Anup Kaphle, digital foreign editor of The Washington Post, profiles the legacy foreign desk in transition as it seeks to merge the work of traditional correspondents and the contributions of new digital reporters, who may never leave the newsroom as they write about faraway events.

These new practices require new skills, and Ahmed Al Omran, a Wall Street Journal correspondent in Saudi Arabia, walks reporters through eight tactics that will improve their use of digital tools to find and verify information on international stories. The Internet security issues raised by this work are discussed in another chapter by Burcu Baykurt, a PhD candidate at Columbia Journalism School, who also examines how Internet governance affects journalists and others using the web to disseminate information.

And Jessie Graham, a former public radio reporter who is now a multimedia producer for Human Rights Watch, describes the shifting line between advocacy and journalism. Some of the journalists affected by closings of foreign bureaus have made comfortable transitions to jobs in advocacy organizations — while those same organizations have increasingly developed their media skills to communicate directly with audiences, without the filter of mainstream media.

Through practical guidance and descriptions of this changing journalistic ecosystem, the Tow Center hopes that The New Global Journalism can help define a new, hybrid foreign correspondent model—not a correspondent who can do everything, but one open to using all reporting tools and a wide range of sources to bring audiences a better understanding of the world.

 

Download the Full Report Here: The New Global Journalism

Chapters:

Being There: The Virtual Eyewitness, By Kelly Golnoush Niknejad

The Foreign Desk in Transition: A Hybrid Approach to Reporting From There—and Here, By Anup Kaphle

A Toolkit: Eight Tactics for the Digital Foreign Correspondent, By Ahmed Al Omran

David Versus Goliath: Digital Resources for Expanded Reporting—and Censoring,  By Burcu Baykurt

A Professional Kinship? Blurring the Lines Between International Journalism and Advocacy, By Jessie Graham

Edited By: Ann Cooper and Taylor Owen

Research

Sensors and Journalism: ProPublica, Satellites and The Shrinking Louisiana Coast


Two months before the programmer-journalists at ProPublica would be ready to go live with an impressive news app illustrating the huge loss of land along the Louisiana coastline, the development team gathered in their conference room above the financial district in Manhattan.

This was partly a show-off session and partly a review. Journalist-developers Brian Jacobs and Al Shaw pulled a browser up onto the glossy 46-inch screen and loaded their latest designs. At first glance the app appears to be simple satellite photography, spanning about 20,000 square miles, but the elegance hides a complicated process of pulling layers of meaning from many rich data sets.

At the heart of the story is the fact that the Louisiana coastline loses land at a rate equivalent to a football field each hour. That comes to 16 square miles per year. The land south of New Orleans has always been low-lying, but since the Army Corps of Engineers built levees along the Mississippi after the huge 1927 floods, the delta has been losing ground. Previously, the river carried sediment down and deposited it to gradually build up dry land throughout the delta. The same levees that protect upstream communities now keep that sediment confined to the channel, blocking it from spreading out to become Louisiana coastline. Environmental researchers say that the energy industry’s canal-dredging and well-drilling have accelerated natural erosion. Together, the constricted river and the oil extraction have exacerbated the effect of sea-level rise from climate change.
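As a quick back-of-the-envelope check of that rate (a football field is roughly 1.1 acres for the playing surface, or about 1.32 acres with end zones, and a square mile is 640 acres):

    # Back-of-the-envelope check of the land-loss rate quoted above.
    HOURS_PER_YEAR = 24 * 365
    ACRES_PER_SQ_MILE = 640
    for field_acres in (1.1, 1.32):  # playing field only / including end zones
        print(round(field_acres * HOURS_PER_YEAR / ACRES_PER_SQ_MILE, 1))
    # prints 15.1 and 18.1 -- consistent with "16 square miles per year"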

The loss of ground endangers people: The dry land used to provide protection to New Orleans’ people and businesses, because when storms like Hurricane Katrina sweep in from the Gulf, they lose power as they move from water to land. It’s therefore crucial to have a wide buffer between the sea and the city. Now, with roughly 2,000 fewer square miles of protective land, the state will have to spend more money building tougher, higher walls, flood insurance will be more costly, infrastructure could break, and the people inside those walls risk death and injury at much higher rates. If the land loss isn’t slowed, the costs will only get higher.

Satellites Clearly Show The Story

For this story, Al Shaw’s goal was to illustrate the scale and severity of the problem. Print journalists have written extensively on the story, but the forty years’ worth of remote sensing data available from NASA’s Landsat satellites helped the ProPublica journalists show the story with immediate power and clarity. They processed Landsat 8 sensing data themselves and drew on the US Geological Survey’s interpretations of data from earlier Landsat craft.

The project combines a high-level view with eight zoomed-in case studies. The scene of land, marsh and water known locally as the Texaco Canals forms one of the most dramatic examples. Starting with data collected from aerial photography in the 1950s and ending with 2012 satellite data, the layered maps show how the canals sliced up the marshlands and how the relocated soil stopped sediment from replenishing the land. The result is an area that starts mostly as land, and ends mostly as open water. Contemporary and archival photos complement the bird’s-eye view with a human-level perspective.

This is Satellite Sensing’s Learning Curve

At this point, we need to reveal a conflict of interest. In February 2014 the Tow Center provided training to four journalists from ProPublica. Lela Prashad, a remote sensing specialist who has worked with NASA, led a two-day workshop covering the fundamental physics of satellite sensing, a briefing on the different satellite types and qualities, where to find satellite data, and the basics of how to process it. ProPublica news apps director Scott Klein had attended a Tow Center journalistic sensing conference eight months earlier, where a presentation by Arlene Ducao and Ilias Koen on their satellite infrared maps of Jakarta suggested that ProPublica’s innovative newsroom might be able to use remote sensing to cover some of its environmental stories in new ways.

To produce this work, the ProPublica journalists learned the physics and applications of remote sensing technology. The earth’s surface pushes energy out into the atmosphere and space: some is an immediate reflection of the sun’s rays, some is energy absorbed earlier. Human sources like city lights and industrial activity also produce emissions. Energy waves range from high-frequency, short-wavelength gamma rays and x-rays, through ultraviolet, into the visible spectrum (what human eyes sense), and on toward the longer wavelengths of infrared, microwave and radio.

Satellites flown by NASA (and by increasing numbers of private companies) point cameras toward Earth, taking pictures of the energy that passes through the atmosphere: the ultraviolet, visible and infrared bands. (The various generations of satellites have had different capabilities; as they have developed, they have recorded Earth in more detail and pass overhead more frequently.)

Those scenes, when processed, can reveal with great accuracy the materials that form Earth’s surface. The exact hue of each pixel represents the specific substance below. Geologists needing to identify types of rock take advantage of the fact that, for example, sandstone reflects a different combination of energy waves than granite. Food security analysts can assess the moisture, and therefore the health, of a country’s wheat crop, helping them predict shortages (and speculators predict pricing). ProPublica is showing the Louisiana coastline changing over time from dry land, to marsh, to open water.

The US Geological Survey (USGS) makes its data available through a series of free online catalogues. Registered users can select the area they are interested in, pick a series of dates, and download image files that include all the available energy bands. Crucially, those image files include the Geographic Information Systems (GIS) metadata that allow the journalists to precisely match the pixels in data files to known locations.
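As an illustration of what those downloads contain, here is a minimal sketch, assuming the rasterio library and a single-band Landsat GeoTIFF from one of the USGS catalogues (the file name is hypothetical), showing the GIS metadata that lets pixels be matched to known locations.

    import rasterio

    with rasterio.open("LC08_L1TP_022039_20140801_B5.tif") as src:
        band = src.read(1)        # pixel values for one spectral band
        print(src.crs)            # coordinate reference system of the scene
        print(src.bounds)         # geographic bounding box of the tile
        # The affine transform maps pixel (col, row) to map coordinates;
        # here, the map coordinates of the tile's top-left corner.
        x, y = src.transform * (0, 0)
        print(x, y)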

 

How The Developers Built it

Brian Jacobs learned how to reproduce and combine the information in an accessible form for ProPublica’s online audience. The opening scene of the app has eight layers. The top one uses a 1922 survey map owned by the USGS and scanned by the Louisiana State University library. Jacobs pulled it into his mapping software to match the geographic features with GIS location data and used Photoshop to prepare it for online display, cutting out the water and normalizing the color.

The bottom layer displays the 2014 coastline, stitched together from six Landsat 8 tiles through many steps of processing. Jacobs picked out images from satellite passes when the skies were free from cloud cover. After pulling in the image tiles from the infrared and true-color bands and merging them together, Jacobs normalized the distortions and color differences so the separate images would mosaic consistently.

Working with the command-line tools GDAL (a geospatial library) and ImageMagick (an image editing suite), he prepared them for online display. Pictures of the Earth’s curved surface need to be slightly warped to make sense as flat images; the types of warps are called projections. The raw USGS images come in the space industry’s WGS84 projection standard, but the web mostly uses Mercator. (Here’s Wikipedia’s explanation, and xkcd’s cartoon version.)
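A hedged sketch of that reprojection step, using rasterio’s Python bindings to GDAL rather than the command-line tools; the input and output file names are placeholders, and the exact source CRS depends on the scene.

    import rasterio
    from rasterio.warp import Resampling, calculate_default_transform, reproject

    dst_crs = "EPSG:3857"  # Web Mercator, the projection most web maps expect

    with rasterio.open("mosaic_source.tif") as src:
        transform, width, height = calculate_default_transform(
            src.crs, dst_crs, src.width, src.height, *src.bounds)
        profile = src.profile.copy()
        profile.update(crs=dst_crs, transform=transform, width=width, height=height)

        with rasterio.open("mosaic_webmercator.tif", "w", **profile) as dst:
            for band in range(1, src.count + 1):
                reproject(
                    source=rasterio.band(src, band),
                    destination=rasterio.band(dst, band),
                    src_transform=src.transform,
                    src_crs=src.crs,
                    dst_transform=transform,
                    dst_crs=dst_crs,
                    resampling=Resampling.bilinear)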

Researchers who work with remote sensing have a specific language and sets of practices for how they treat color in their visualizations. The ProPublica journalists adopted some of those practices, but also needed to produce their work for a lay audience. So, although the features on ProPublica’s maps are easily recognizable, they are not what’s known as ‘true color’. When viewers look closely at the bottom layer, it’s clear that these are not simply aerial photographs. In comparison to satellite photography displayed via Google maps, the ProPublica layer has a much sharper contrast between land and water. The green pixels showing land are vibrant, while the blue sections showing water are rich, deep blues.

The color palette is, in fact, a combination of two sets of satellite data: The water pixels are interpreted from Landsat’s infrared and green bands, while the land pixels come from Landsat’s ‘true color’ red, green and blue bands, with extra sharpening from the panchromatic band (the panchromatic band appears as shades of gray, but can be used to sharpen the color imagery). At 30m/pixel, Landsat’s color bands are lower resolution than its 15m/pixel panchromatic band.
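A minimal sketch of how a land/water distinction like the one described can be derived from the green and near-infrared bands. This is the general normalized-difference approach, not ProPublica’s actual pipeline; the file names and threshold are assumptions.

    import numpy as np
    import rasterio

    with rasterio.open("scene_green.tif") as g, rasterio.open("scene_nir.tif") as n:
        green = g.read(1).astype(np.float32)
        nir = n.read(1).astype(np.float32)
        profile = g.profile

    # Water reflects green light and absorbs near-infrared, so the normalized
    # difference (NDWI) is high over open water and low over vegetation and soil.
    ndwi = (green - nir) / (green + nir + 1e-6)
    water_mask = ndwi > 0.0  # illustrative threshold only

    profile.update(dtype="uint8", count=1)
    with rasterio.open("water_mask.tif", "w", **profile) as dst:
        dst.write(water_mask.astype("uint8"), 1)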

Step By Step Frames

A detail of a single tile in the true-color bands, somewhat color-corrected.

Multiple tiles of the area stitched together, with images from the true-color and panchromatic bands combined, a process known as pansharpening.

The water mask
The mask ProPublica produced from the near-infrared and green bands, used to distinguish areas of land from water.

Pansharpened, zoomed
The final result of ProPublica’s satellite image processing: the images have been pansharpened and the water layer derived from the near-infrared and green bands has been added.

The final view ProPublica showed their users
The underlay for ProPublica’s case studies. The land pixels combine the true-color bands and the high-resolution panchromatic band; the water pixels come from the infrared and green bands.

Google satellite view, zoomed
The same area as shown in Google Maps’ satellite view, which mostly uses true-color satellite imagery for land and bathymetry data for water.

A detail of the USGS map of the region
Each color represents a period of land loss; ProPublica extracted each period into a separate layer of its interactive map.

The other layers come from a range of sources. In the opening scene, viewers can bring up overlays of the human-built infrastructure associated with the oil and gas industry, including the wells and pipelines, the dredged canals, and the levees that protect the homes and businesses around the coastline.

When users zoom in to one of ProPublica’s case studies, they can scrub through another 16 layers. Each one shows a slice of time when the coastline receded. A layer of olive green pixels indicates the remaining dry land. The data for these 16 layers came from researchers at the US Geological Survey (USGS) who had analyzed 37 years of satellite data combined with historical surveys and mid-century aerial photography. ProPublica worked with John Barras at the USGS, a specialist who could draw on years of his own work and decades of published studies. He handed over a large geo-referenced image file exported from the software suite ERDAS Imagine, with each period’s land loss rendered in a separate color.

The Amount of Time, Skill and Effort

Scott Klein described this project as one of ProPublica’s larger ones, but not abnormally so. His team of developer-journalists release around twelve projects of this size each year, as well as producing smaller pieces to accompany work by the rest of ProPublica’s newsroom.

For six months, the project was a major focus for Al Shaw and Brian Jacobs. Both Shaw and Jacobs are young, highly skilled and prized developer-journalists. Al Shaw has a BA and is highly active in New York’s Hacks/Hackers community. Brian Jacobs is a Knight-Mozilla Fellow working at ProPublica, with a background that includes a year at MIT’s Senseable City Lab and four years as a UI designer at Azavea, a Philadelphia-based geospatial software company. They worked on it close to full time, with oversight from their director Scott Klein. During the later stages, ProPublica’s design director David Sleight advised on the interaction design, hired a freelance illustrator and led user-testing. ProPublica partnered with The Lens, a non-profit public-interest newsroom based in New Orleans, whose environmental reporter Bob Marshall wrote the text. The Lens also sourced three freelance photo researchers and photographers for ProPublica.

ProPublica Have Shared Their Tools

To produce the work, ProPublica had to extend the ‘simple-tiles’ software library they use to publish maps – a process that soaked up months of developer time. They’ve now open-sourced that code – a move which can radically speed up the development process for other newsrooms who have skilled developers. In common with most news organizations, the interactive maps ProPublica has historically published have used vector graphics: which display as outlines of relatively simple geographic and city features like states, roads and building footprints. This project renders raster (aka bitmap) images, the kind of file used for complicated or very detailed visual information.
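For readers unfamiliar with how map tiles are addressed, here is the standard “slippy map” tile-numbering math that Web Mercator tile maps use; this is the general formula, not ProPublica’s code or the simple-tiles API, and the example coordinates are illustrative.

    import math

    def lonlat_to_tile(lon_deg, lat_deg, zoom):
        """Return the (x, y) index of the Web Mercator tile containing a point."""
        lat_rad = math.radians(lat_deg)
        n = 2 ** zoom
        x = int((lon_deg + 180.0) / 360.0 * n)
        y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
        return x, y

    # Example: the tile covering a point on the Louisiana coast at zoom level 10.
    print(lonlat_to_tile(-90.66, 29.25, 10))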

ProPublica’s explanation about their update to simple-tiles is available on their blog, and the code is available via github.

Their Launch

ProPublica put the app live on the 28th of August, exactly nine years after Hurricane Katrina forced New Orleans’ mayor to order the city’s first-ever mandatory evacuation.

Announcements, announcements-home, Events, Research

Upcoming Events


All-Class Lecture: The New Global Journalism

Tuesday, Sep. 30, 2014, 6:00pm

(Lecture Hall)

Based on a new report from the Tow Center, a panel discussion on how digital technology and social media have changed the work of journalists covering international events. #CJSACL

Panelists include report co-authors: 

Ahmed Al Omran, Saudi Arabia correspondent at The Wall Street Journal

Burcu Baykurt, Ph.D. candidate in Communications at Columbia Journalism School

Jessie Graham, Senior Multimedia Producer at Human Rights Watch

Kelly Golnoush Niknejad, Editor-in-Chief at Tehran Bureau

Program will be moderated by Dean of Academic Affairs, Sheila Coronel

Event begins at 6 PM

RSVP is requested at JSchoolRSVP@Columbia.edu

Announcements, announcements-home, Events

Upcoming Tow Event: Just Between You and Me?


Just between you and me?

(Pulitzer Hall – 3rd Floor Lecture Hall)

In the wake of the Snowden disclosures, digital privacy has become more than just a hot topic, especially for journalists. Join us for a conversation about surveillance, security and the ways in which “protecting your source” means something different today than it did just a few years ago. And, if you want to learn some practical, hands-on digital security skills—including tools and techniques relevant to all journalists, not just investigative reporters on the national security beat—stick around to find out what the Tow Center research fellows have in store for the semester.

The event will be held at 6 p.m. on Monday, August 25th in the 3rd Floor Lecture Hall of Pulitzer Hall. We welcome and encourage all interested students, faculty and staff to attend.

How It's Made, Research

Hyper-compensation: Ted Nelson and the impact of journalism


NewsLynx is a Tow Center research project and platform aimed at better understanding the impact of news. It is conducted by Tow Fellows Brian Abelson, Stijn DeBrouwere & Michael Keller.

“If you want to make an apple pie from scratch, you must first invent the universe.” — Carl Sagan

Before you can begin to measure impact, you need to first know who’s talking about you. While analytics platforms provide referrers, social media sites track reposts, and media monitoring tools follow mentions, these services are often incomplete and come with a price. Why is it that, on the internet — the most interconnected medium in history — tracking linkages between content is so difficult?

The simple answer is that the web wasn’t built to be *fully* connected, per se. It’s an idiosyncratic, labyrinthine garden of forking paths with no way to navigate from one page to pages that reference it.

We’ve spent the last few months thinking about and building an analytics platform called NewsLynx which aims to help newsrooms better capture the quantitative and qualitative effects of their work. Many of our features are aimed at giving newsrooms a better sense of who is talking about their work. This seemingly simple feature, to understand the links among web pages, has taken up the majority of our time. The obstacle turns out to be a shortcoming in the fundamental architecture of the web. Without that shortcoming, however, the web might never have succeeded.

The creator of the web, Tim Berners-Lee, didn’t provide a means for contextual links in the specification for HTML. The world wide web wasn’t the only idea for networking computers, however. Over 50 years ago an early figure in computing had a different vision of the web – a vision that would have made the construction of NewsLynx a lot easier today, if not completely unnecessary.

Around 1960, a man named Ted Nelson came up with an idea for a structure of linking pieces of information in a two-way fashion. Whereas links on the web today just point one way — to the place you want to go — pages on Nelson’s internet would have a “What links here?” capability, so you would know all the websites that point to your page.
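A toy sketch of what that “What links here?” capability amounts to in data-structure terms: keeping, alongside each page’s outbound links, a reverse index from every target back to the pages that cite it. The URLs here are invented for illustration.

    from collections import defaultdict

    forward_links = {
        "https://example.org/story": ["https://example.com/report"],
        "https://blog.example.net/post": ["https://example.com/report",
                                          "https://example.org/story"],
    }

    backlinks = defaultdict(set)
    for source, targets in forward_links.items():
        for target in targets:
            backlinks[target].add(source)

    # Everything that points at the report: the lookup one-way links can't answer.
    print(sorted(backlinks["https://example.com/report"]))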

And if you were dreaming up the ideal information web, this structure makes complete sense: why not make the most connections possible? As Borges writes, “I thought of a labyrinth of labyrinths, of one sinuous spreading labyrinth that would encompass the past and the future and in some way involve the stars.”

Nelson called his project Xanadu, but it had the misfortune of being both extremely ahead of its time and incredibly late to the game. Project Xanadu’s first and somewhat cryptic release debuted this year: over 50 years after it was first conceived.

In the meantime, Berners-Lee put forward HTML, with its one-way links, in the early ’90s, and it took off into what we know today. One of the reasons for the web’s success is its extremely informal, ad-hoc functionality: anyone can put up an HTML page without hooking into or caring about a more elaborate system. Compared to Xanadu, what we use today is the quick and dirty implementation of a potentially much richer and also much harder to maintain ecosystem.

Two-way linking would not only make impact research easier; it would also address a number of other problems on the web. In his latest book, “Who Owns the Future?”, Jaron Lanier discusses two-way linking as a potential solution to copyright infringement and a host of other web maladies. His logic is that if you could always know who is linking where, then you could create a system of micropayments to make sure authors get proper credit. His idea has its own caveats, but it shows the systems that two-way linking might enable. Chapter Seven of Lanier’s book discusses some of the other reasons Nelson’s idea never took off.

The desire for two-way links has not gone away, however. In fact, the *lack* of two-way links is an interesting lens through which to view the current tech environment. By creating a central server that catalogs and makes sense of the one-way web, Google adds value with its ability to make the internet seem more like Project Xanadu. If two-way links existed, you wouldn’t need all of the features of Google Analytics. People could implement their own search engines with their own page rank algorithms based on publicly available citation information.

The inefficiency of one-way links left a hole at the center of the web for a powerful player to step in and play librarian. As a result, if you want to know how your content lives online, you have to go shopping for analytics. To effectively monitor the life of an article, newsrooms currently use a host of services from trackbacks and Google Alerts to Twitter searches and ad hoc scanning. Short link services break web links even further. Instead of one canonical URL for a page, you can have a bit.ly, t.co, j.mp or thousands of other custom domains.
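One partial workaround, sketched here with the requests library: follow a short link’s redirects to its final address, then prefer the page’s own rel="canonical" tag if it declares one. A real implementation would use an HTML parser rather than this rough regular expression, and the short URL below is a placeholder.

    import re
    import requests

    def canonical_url(url):
        resp = requests.get(url, allow_redirects=True, timeout=10)
        match = re.search(
            r'<link[^>]+rel=["\']canonical["\'][^>]+href=["\']([^"\']+)["\']',
            resp.text, re.IGNORECASE)
        return match.group(1) if match else resp.url

    print(canonical_url("https://bit.ly/example-short-link"))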

NewsLynx doesn’t have the power of Google. But we have been working on a core feature that would leverage Google features and other two-way-link surfacing techniques to make monitoring the life of an article much easier: we’re calling them “recipes”, for now (#branding suggestions welcome). In NewsLynx, you’ll add these recipes to the system and it will alert you to all pending mentions in one filterable display. If a citation is important, you can assign it to an article or to your organization more generally. We also have a few built-in recipes to get you started.
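As a rough illustration only (the field names and helper functions below are hypothetical, not the final NewsLynx schema), a recipe amounts to a named, repeatable search whose results feed that single alerts display:

```python
# Hypothetical recipe definitions -- illustrative only, not the final
# NewsLynx schema. Each recipe is a named, repeatable search.
recipes = [
    {
        "name": "Google Alert: investigation headline",
        "source": "google_alerts",
        "query": '"our investigation headline"',
        "schedule": "every 30 minutes",
    },
    {
        "name": "Twitter search: links to the piece",
        "source": "twitter_search",
        "query": "example-newsroom.org/2014/investigation",
        "schedule": "every 10 minutes",
    },
]

def run_recipe(recipe):
    """Stand-in for the per-source fetching logic (an alerts feed, a
    Twitter search, etc.); it would return a list of mention dicts."""
    return []

def pending_mentions(recipe_list):
    """Run every recipe and merge the results into one list that an
    editor can review and assign to an article or to the organization."""
    mentions = []
    for recipe in recipe_list:
        mentions.extend(run_recipe(recipe))
    return mentions

print(pending_mentions(recipes))  # [] until run_recipe is filled in
```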

We’re excited to get this tool into the hands of news sites and see how it helps them better understand their place in the world wide web. As we prepare to launch the platform in the next month or so, check back here for any updates.

Past Events

Why We Like Pinterest for Fieldwork: Research by Nikki Usher and Phil Howard

5

Nikki Usher, GWU

Phil Howard, UW and CEU

7/16/2014

Anyone tackling fieldwork these days can choose from a wide selection of digital tools to put in their methodological toolkit. Among the best of these tools are platforms that let you archive, analyze, and disseminate at the same time. It used to be that these were fairly distinct stages of research, especially for the most positivist among us. You came up with research questions, chose a field site, entered the field site, left the field site, analyzed your findings, got them published, and shared your research output with friends and colleagues.

 

But the post-positivist approach that many of us like involves adapting your research questions—reflexively and responsively—while doing fieldwork. Entering and leaving your field site is not a cool, clean and complete process. We analyze findings as we go, and involve our research subjects in the analysis. We publish, but often in journals or books that can’t reproduce the myriad digital artifacts that are meaningful in network ethnography. Actor network theory, activity theory, science and technology studies and several other modes of social and humanistic inquiry approach research as something that involves both people and devices. Moreover, the dissemination of work doesn’t have to be something that happens after publication or even at the end of a research plan.

 

Nikki’s work involves qualitative ethnographic research at field sites, where fieldwork can last anywhere from five months to a brief week-long visit to a quick drop-in day. She learned the hard way from her research for Making News at The New York Times that failing to find a good way to organize and capture images was a missed opportunity once data collection was over. Since then, Nikki’s been using Pinterest for fieldwork image gathering quite a bit. Phil’s work on The Managed Citizen was set back when he lost two weeks of field notes on the chaotic floor of the Republican National Convention in 2000 (security incinerates all the detritus left by convention goers). He’s been digitizing field observations ever since.

 

Some people put together personal websites about their research journey. Some share over Twitter. And there are plenty of beta tools, open source or otherwise, that people play with. We’ve both enjoyed using Pinterest for our research projects. Here are some points on how we use it and why we like it.

 

How To Use It

  1. When you start, think of this as your research tool and your resource. If you dedicate yourself to this as your primary archiving system for digital artifacts, you are more likely to build it up over time. If you think of it as a social media publicity gimmick for your research, you’ll eventually lose interest and it is less likely to be useful for anyone else.
  2. Integrate it with your mobile phone, because this amps up your capacity for portable, taggable image data collection.
  3. Link the board posts to Twitter or your other social media feeds. Pinterest itself isn’t that lively a place for researchers yet. The people who want to visit your Pinterest page are probably actively following your activities on other platforms, so be sure to let content flow across platforms.
  4. Pin lots of things, and lots of different kinds of things. Include decent captions, though be aware that if you are feeding Twitter you need to fit character limits.
  5. Use it to collect images you have found online and images you’ve taken yourself during your fieldwork, and invite the communities you are working with to contribute.
  6. Back up and export things once in a while for safekeeping. There is no built-in export function, but there are a wide variety of hacks and workarounds for transporting your archive; a rough sketch of one follows this list.
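Here is one such workaround, sketched in Python. It assumes your board exposes an RSS feed at the pinterest.com/username/board.rss pattern, which Pinterest has offered but does not guarantee, so treat it as one hack among many:

```python
import csv
import xml.etree.ElementTree as ET

import requests

# Hypothetical board address -- substitute your own username and board.
BOARD_RSS = "https://www.pinterest.com/yourusername/your-board.rss"

def export_board(rss_url, out_path="board_backup.csv"):
    """Save each pin's title, link, and publication date to a CSV file."""
    feed = ET.fromstring(requests.get(rss_url, timeout=10).content)
    with open(out_path, "w", newline="") as handle:
        writer = csv.writer(handle)
        writer.writerow(["title", "link", "published"])
        for item in feed.iter("item"):
            writer.writerow([
                item.findtext("title", default=""),
                item.findtext("link", default=""),
                item.findtext("pubDate", default=""),
            ])

# export_board(BOARD_RSS)
```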

 

What You Get

  1. Pinterest makes it easy to track the progress of the image data you gather. You may find yourself taking more photos in the field because they can be easily arranged, saved and categorized.
  2. Using it regularly adds another level of data: photos and documents captured on your phone and added to Pinterest can be quickly captioned in the field and re-catalogued, giving you a chance to review the visual and built environment of your field site and interrogate your observations afresh.
  3. Visually-enhanced constant comparative methods: post-data collection, you can go beyond notes to images and captions that are easily scanned for patterns and points of divergence. This may be going far beyond what Glaser and Strauss had imagined, of course.
  4. Perhaps most important, when you forget what something looks like when you’re writing up your results, you’ve got an instant, easily searchable database of images and clues to refresh your memory.

Why We Like It

  1. It’s great for spontaneous presentations. Images are such an important part of presenting any research. Having a quick, publicly accessible archive of content allows you to speak, on the fly, about what you are up to. You can’t give a tour of your Pinterest page for a job talk. But having the resource there means you can call on images quickly during a Q&A period, or quickly load something relevant on a phone or browser during a casual conversation about your work.
  2. It gives you a way to interact with subjects. Having the Pinterest link allows you to show a potential research subject what you are up to and what you are interested in. During interviews it allows you to engage people on their interpretation of things. Having visual prompts handy can enrich and enliven any focus group or single subject interview. These don’t only prompt further conversation, they can prompt subjects to give you even more links, images, videos and other digital artifacts.
  3. It makes your research interests transparent. Having the images, videos and artifacts for anyone to see is a way for us to show what we are doing. Anyone with interest in the project and the board link is privy to our research goals. Our Pinterest page may be far less complicated than many of our other efforts to explain our work to a general audience.
  4. You can disseminate as you go. If you get the content flow right, you can tell people about your research as you are doing it. Letting people know about what you are working on is always a good career strategy. Giving people images rather than article abstracts and draft chapters gives them something to visualize and improves the ambient contact with your research community.
  5. It makes digital artifacts more permanent. As long as you keep your Pinterest, what you have gathered can become a stable resource for anyone interested in your subjects. As sites and material artifacts change, what you have gathered offers a permanent and easily accessible snapshot of a particular moment of inquiry for posterity.

 

Pinterest Wish-list

One of us is a Windows Phone user (yes, really), and it would be great if there were a real Pinterest app for the Windows Phone. One-touch integration from the iPhone camera roll, much like Twitter, Facebook, and Flickr offer, would be great (though there is an easy hack).

 

We wish it would be easier to have open, collaborative boards. Right now, the only person who can add to a board is you, at least at first. You can invite other people to join a “group board” via email, but Pinterest does not have open boards that allow anyone with a board link to add content.

 

Here’s a look at our Pinboards: Phil Howard’s Tech + Politics board, and Nikki Usher’s boards on U.S. Newspapers. We welcome your thoughts…and send us images!

Nikki Usher is an assistant professor at the George Washington University’s School of Media and Public Affairs. Her project is Post Industrial News Spaces and Places with Columbia’s Tow Center for Digital Journalism. Phil Howard is a professor at the Central European University and the University of Washington. His project is a book on Political Power and the Internet of Things for Yale University Press.

 

Announcements, Events, Past Events, Research

Digital Security and Source Protection For Journalists: Research by Susan McGregor

3

EXECUTIVE SUMMARY

The law and technologies that govern the functioning of today’s digital communication systems have dramatically affected journalists’ ability to protect their sources.  This paper offers an overview of how these legal and technical systems developed, and how their intersection exposes all digital communications – not just those of journalists and their sources – to scrutiny. Strategies for reducing this exposure are explored, along with recommendations for individuals and organizations about how to address this pervasive issue.

 

DOWNLOAD THE PDF

Order a (bound) printed copy.

 

DIGITAL SECURITY AND SOURCE PROTECTION FOR JOURNALISTS

Preamble

Digital Security for Journalists: A 21st Century Imperative

The Law: Security and Privacy in Context

The Technology: Understanding the Infrastructure of Digital Communications

The Strategies: Understanding the Infrastructure of Digital Communications

Looking Ahead

Footnotes

 

Research

Knight Foundation joins The Tow Foundation as a sponsor for the initiative headed by Columbia University’s Tow Center for Digital Journalism

5

“Tow Center program defends journalism from the threat of mass surveillance” by Jennifer Henrichsen and Taylor Owen on Knight Blog

NEW YORK – June 10, 2014 – The Journalism After Snowden initiative, a project of The Tow Center for Digital Journalism at Columbia University Graduate School of Journalism, will expand to further explore the role of journalism in the age of surveillance, thanks to new funding from the John S. and James L. Knight Foundation.

Journalism After Snowden will contribute high-quality conversations and research to the national debate around state surveillance and freedom of expression through a yearlong series of events, research projects and articles that will be published in coordination with the Columbia Journalism Review.

Generous funding from The Tow Foundation established the initiative earlier in the academic year. The initiative officially kicked off in January with a high-level panel of prominent journalists and First Amendment scholars who tackled digital privacy, state surveillance and the First Amendment rights of journalists.

Read more in the press release from the Knight Foundation.

Past Events

Glenn Greenwald Speaks | Join the Tow Center for an #AfterSnowden Talk in San Francisco on June 18, 2014

6

Join the Tow Center for an evening lecture with Glenn Greenwald, who will discuss the state of journalism today and his recent reporting on surveillance and national security issues, on June 18, 2014 at 7pm at the Nourse Theater in San Francisco.

In April 2014, Greenwald and his colleagues at the Guardian received the Pulitzer Prize for Public Service. Don’t miss hearing Greenwald speak in person as he fits all the pieces together, recounting his high-intensity eleven-day trip to Hong Kong, examining the broader implications of the surveillance detailed in his reporting, and revealing fresh information on the NSA’s unprecedented abuse of power with never-before-seen documents entrusted to him by Snowden himself. The event is sponsored by Haymarket Books, the Center for Economic Research and Social Change, the Glaser Progress Foundation, and the Tow Center for Digital Journalism at Columbia Journalism School. Reserve your seat for Glenn Greenwald Speaks: Edward Snowden, the NSA, and the U.S. Surveillance State.

Please note: this is a ticketed event. Tickets are $4.75 each.  | Purchase Tickets

This event is part of Journalism After Snowden, a yearlong series of events, research projects and writing from the Tow Center for Digital Journalism in collaboration with the Columbia Journalism Review. For updates on Journalism After Snowden, follow the Tow Center on Twitter @TowCenter #AfterSnowden.

Journalism After Snowden is funded by The Tow Foundation and the John S. and James L. Knight Foundation.

Lauren Mack is the Research Associate at the Tow Center. Follow her on Twitter @lmack.

Past Events

Tow Center Launches Amateur Footage: A Global Study of User-Generated Content in TV and Online News Output

10

Crediting is rare, there’s a huge gulf between how senior managers and newsdesks talk about it, and there’s a significant reliance on news agencies for discovery and verification. These are some of the key takeaways of Amateur Footage: A Global Study of User-Generated Content in TV and Online News Output, published today by the Tow Center for Digital Journalism.

 

The aim of this research project was to provide the first comprehensive report about the use of user-generated content (UGC) among broadcast news channels. For this report, UGC means photographs and videos captured by people unrelated to the newsroom, who would not describe themselves as professional journalists.

 

Some of the Principal Findings are:

  • UGC is used by news organizations daily and can produce stories that otherwise would not, or could not, be told. However, it is often used only when other imagery is not available. 40% of UGC on television was related to Syria.
  • There is a significant reliance on news agencies in terms of discovering and verifying UGC. The news agencies have different practices and standards in terms of how they work with UGC.
  • News organizations are poor at acknowledging when they are using UGC and worse at crediting the individuals responsible for capturing it. Our data showed that 72 percent of UGC was not labeled or described as UGC, and just 16 percent of UGC on TV had an onscreen credit.
  • News managers are often unaware of the complexities involved in the everyday work of discovering, verifying, and clearing rights for UGC. Consequently, staff in many newsrooms do not receive the training and support required to develop these skills.
  • Vicarious trauma is a real issue for journalists working with UGC every day – and it’s different from traditional newsroom trauma. Some newsrooms are aware of this – but many have no structured approach or policy in place to deal with it.
  • There is a fear amongst rights managers in newsrooms that a legal case could seriously impact the use of UGC by news organizations in the future.

 

This research was designed to answer two key questions.  First, when and how is UGC used by broadcast news organizations, on air as well as online?  Second, does the integration of UGC into output cause any particular issues for news organizations? What are those issues and how do newsrooms handle them?

 

The work was completed in two phases. The first involved an in-depth, quantitative content analysis examining when and how eight international news broadcasters use UGC; in all, 1,164 hours of TV output and 2,254 Web pages were analyzed. The second was entirely qualitative and saw the team interview 64 news managers, editors, and journalists from 38 news organizations based in 24 countries across five continents. This report draws on both phases to provide a detailed overview of the key findings.

 

The research provides the first concrete figures we have about the level of reliance on UGC by international news channels. It also explores six key issues that newsrooms face in terms of UGC. The report is designed around those six issues, meaning you can dip into any one particular issue:

1) Workflow – how is UGC discovered and verified? Do newsrooms do this themselves, and if so, which desk is responsible? Or is UGC ‘outsourced’ to news agencies?

2) Verification – are there systematic processes for verifying UGC? Is there a threshold that has to be reached before a piece of content can be used?

3) Permissions – how do newsrooms seek permissions? Do newsrooms understand the copyright implications around UGC?

4) Crediting – do newsrooms credit UGC?

5) Labeling – are newsrooms transparent about the types of UGC that they use in terms of who uploaded the UGC and whether they have a specific agenda?

6) Ethics and Responsibilities – how do newsrooms consider their responsibilities to uploaders, the audience and their own staff?

 

The full report can be viewed here.

Events

Tow Tea: Understanding the Role of Algorithms and Data at BuzzFeed

0

Tow Tea: Understanding the Role of Algorithms and Data at BuzzFeed
Thursday, Dec. 2, 2014
4:00 pm – 6:00 pm
The Brown Institute for Media Innovation
RSVP encouraged via Eventbrite

Ky Harlin, Director of Data Science for BuzzFeed, will join us along with an editor and a reporter from BuzzFeed. Together, they will help us understand the relationship between content and data: How does BuzzFeed predict whether a story will go viral? What is shareability? Do reporters and editors at BuzzFeed make editorial decisions based on input from data scientists who track traffic and social networks? What is the day-to-day workflow like at BuzzFeed, and how do the methods employed differ from those used in traditional newsroom settings?

Read more about Ky here: http://bit.ly/1n3xc3B

RSVP encouraged. Tea, coffee, and dessert snacks will be served.

For questions about this event, please contact Smitha Khorana, Tow Center DMA, at sk3808@columbia.edu.

Events

Upcoming Events – Journalism After Snowden

0


Journalism After Snowden: Normalizing Surveillance
A lecture with Ethan Zuckerman
November 18th, 2014
12:00 pm – 1:30 pm

RSVP via Eventbrite

Does the normalization of commercial surveillance help explain the mixed reaction Americans have had towards revelations of widespread government surveillance by Snowden and other whistle-blowers?

 

Events

Upcoming Events: #Ferguson

2

#Ferguson: Reporting a Viral News Story

Thursday, October 23rd, 7 PM

Lecture Hall, Columbia Journalism School

We are hosting an event this Thursday with Antonio French of the City of St. Louis, Wesley Lowery of the Washington Post, Alice Speri of VICE News, and Zeynep Tufekci, Professor at the University of North Carolina. The panel will be moderated by Emily Bell. It is open to the public. Register here for tickets: www.reportingferguson.eventbrite.com