
Journalism After Snowden Lecture Series


Presented by: Columbia University Graduate School of Journalism and the Information Society Project at Yale Law School

 

September 29, 2014

6:00pm-7:30pm, with reception to follow

Brown Institute for Media Innovation at Columbia University Graduate School of Journalism

 

Source Protection: Rescuing a Privilege Under Attack

Speaker: David A. Schulz, Partner at Levine Sullivan Koch & Schulz, LLP

Moderator: Emily Bell, Director of the Tow Center for Digital Journalism

 

Watch Full Lecture 

David Schulz | Outside Counsel to The Guardian; Lecturer, Columbia Law School; Partner, Levine Sullivan Koch & Schulz LLP | @LSKSDave
David Schulz heads the New York office of Levine Sullivan Koch & Schulz, LLP, a leading media law firm with a national practice focused exclusively on the representation of news and entertainment organizations in defamation, privacy, newsgathering, access, copyright, trademark and related First Amendment matters. Schulz has been defending the rights of journalists and news organizations for nearly 30 years, litigating in the trial courts of more than 20 states and regularly representing news organizations on appeals before both state and federal tribunals.

Schulz successfully prosecuted access litigation by the Hartford Courant to compel the disclosure of sealed dockets in cases being secretly litigated in Connecticut’s state courts, and the challenge by 17 media organizations to the closure of jury selection in the Martha Stewart criminal prosecution. He successfully defended against invasion of privacy claims brought by Navy SEALs whose photos with injured Iraqi prisoners were discovered online by a reporter, and has prevailed in Freedom of Information Act litigation pursued by the Associated Press to compel the release of files relating to detainees held by the Department of Defense at Guantanamo Bay and to records of the military service of President George W. Bush.

Schulz is described as an “incredibly skilled” litigation strategist and a “walking encyclopedia” of media law by Chambers USA (Chambers & Partners, 2006), and is recognized as one of the nation’s premier First Amendment lawyers by The Best Lawyers in America (Woodward/White, 2006). He regularly represents a broad range of media clients, including The New York Times, the Associated Press, CBS Broadcasting, Tribune Company, and The Hearst Corporation, along with other national and local newspapers, television networks and station owners, cable news networks, and Internet content providers.

Schulz is the author of numerous articles and reports, including Policing Privacy, 2007 MLRC Bulletin 25 (September 2007); Judicial Regulation of the Press? Revisiting the Limited Jurisdiction of Federal Courts and the Scope of Constitutional Protection for Newsgathering, 2002 MLRC Bulletin 121 (April 2002); Newsgathering as a Protected Activity, in Freedom of Information and Freedom of Expression: Essays in Honour of Sir David Williams (J. Beatson & Y. Cripps eds., Oxford University Press 2000); and Tortious Interference: The Limits of Common Law Liability for Newsgathering, 4 Wm. & Mary Bill Rts. J. 1027 (1996) (with S. Baron and H. Lane). He received a B.A. from Knox College in Galesburg, Illinois, where he has served for more than twenty years on the Board of Trustees. He received his law degree from Yale Law School and holds a master’s degree in economics from Yale University.

 

 


Upcoming Events: The Tow Responsive Cities Initiative


The Tow Responsive Cities Initiative
Workshop with Susan Crawford
Friday, 10/31 – 9:00 am

By invitation only

The extension of fiber optic high-speed Internet access connections across cities in America could provide an opportunity to remake democratic engagement over the next decade. Cities would have the chance to use this transformative communications capacity to increase their responsiveness to constituents, making engagement a two-way, nuanced, meaningful part of what a city does. The political capital that this responsiveness would generate could then be allocated to support big ideas that could address the problems facing many American cities, including growing inequality, diminishing quality of life, and movement of jobs outside the city’s borders.


Recap: Source Protection in the Information Age


“Assert the right to report.” That was the mandate Columbia’s Sheila Coronel gave our group of journalists and online privacy and security advocates this past Saturday morning, kicking off a day full of panels and workshop activities on the theme of “Source Protection in the Information Age.” In this post-Snowden age, we were reminded, as scrutiny from the government and other authority structures intensifies, simple source protection becomes something more. As Aaron Williamson put it succinctly in the morning’s first panel: “Using encryption is activism. It’s standing up for your right to keep communications private.”

How to be an effective activist, then? The day’s emphasis was intensely practical: know your tools. We each had the opportunity to cycle through six of the 14 available workshops, a spread that effectively covered the typical activities journalists engage in: research, communication, and writing. That translated into a focus on encrypted messaging via chat and email, anonymous browsing via Tor, and desktop tools like the portable Tails operating system, which lets journalists develop and store their research and writing securely. Snowden himself used Tails to evade the NSA’s scrutiny. We also received timely reminders about creating secure passwords and remembering that third parties are aware of our every move online.

Throughout, we were reminded of an important fact: you’re only as strong as your weakest participant. So journalists not only need to embrace these tools; they also need to educate their sources in how to use them effectively, and to learn how to negotiate the appropriate means and levels of security for communicating with sources.

That’s where the user experience of these tools becomes so important. The most successful tools are bound to be those that are quick to install and intuitive to use. While some of the tools were as easy to download and install as a browser or plugin (Tor, Ghostery), others involved complex steps and technical knowledge that might intimidate some users. That fact underlines the need to apply user-centered design principles to these excellent tools if they’re to be universally adopted. We have to democratize access to them.

Another tension point was the concern that using secure tools actually draws attention to the individual. A valid fear, perhaps, but the answer isn’t to abandon the tools; it’s to employ them more often, even when security isn’t a concern, so that sensitive traffic is buried in everyday noise. On that note, the day was a success. Many of us, already more or less aware of these issues, left not just enriched with more knowledge, but with laptops sporting a few more tools to empower us as activists.

Robert Stribley is the Associate Experience Director at Razorfish. You can follow him on Twitter at @stribs.

For resources and information about this and future events, visit our Source Protection: Resources page, and follow organizers/hosts Sandy Ordonez, Susan McGregor, Lorenzo Franceschi-Bicchierai and the Tow Center on Twitter.


Source Protection: Resources

We are happy to report that many of the attendees of our October 11 workshop on Source Protection in the Information Age left with a good foundation in digital security, and trainers gained a better understanding of the challenges journalists face in becoming more secure. 
This was a collaboratively organized event that brought together organizations and individuals passionate about the safety and security of journalists. We remain committed to continuing to support this collaboration, and will be planning future workshops.
If you weren’t able to attend the event, we recommend starting with this brief recap. In addition, we would like to share some resources that you may find useful for continuing to develop your skills and understanding in this area.
Enjoy!
The organizers
(Lorenzo, Susan, Sandy & George)

Workshop Panel Videos

Panel 1: How technology and the law put your information at risk

Runa Sandvik, James Vasile, Aaron Williamson | Moderated by Jenn Henrichsen

Panel 2: Source protection in the real world – how journalists make it work

Online Resources

Workshop Resources

Online Library

Tactical Tech Collective

Tactical Tech’s Privacy & Expression program builds the digital security awareness and skills of independent journalists and anyone else concerned about the security risks and vulnerabilities of digital tools. On its website you can find manuals, short films, interactive exercises and well-designed how-tos.

Upcoming Privacy & Security Events

October 20 | 6:30pm | Tracked Online: How it’s done and how you can protect yourself
Techno-Activism 3rd Mondays (TA3M) is a community-run monthly meetup that happens in 21 cities throughout the world. It is a good place to meet and learn from individuals who work on anti-surveillance and anti-censorship issues. The October edition of NYC TA3M will feature the former product lead of Ghostery, who will explain how third parties track you online, what information they collect, and what you can do to protect yourself. If you would like to be alerted about upcoming TA3M events, contact Sandra Ordonez at sandraordonez@openitp.org.
RSVP: 

Circumvention Tech Festival

The Circumvention Tech Festival will take place March 1-6 in Valencia, Spain. The festival gathers the community fighting censorship and surveillance for a week of conferences, workshops, hackathons, and social gatherings, featuring many of the Internet Freedom community’s flagship events. This includes a full day of journalist security events, conducted in both English and Spanish. It is a great opportunity to meet the digital security pioneers.
RSVP: 

 


The New Global Journalism: New Tow Center Report

Throughout the twentieth century the core proposition of foreign correspondence was to bear witness—to go places where the audience couldn’t and report back on what occurred. Three interrelated trends now challenge this tradition. First, citizens living through events can tell the world about them directly via a range of digital technologies. Second, journalists themselves have the ability to report on some events, particularly breaking news, without physically being there. Finally, the financial pressures that digital technology has brought to legacy news media have forced many to close their international bureaus.

In this age of post-legacy media, local reporters, activists, ordinary citizens—and traditional foreign correspondents—are all now using digital technologies to inform the world of breaking news, and to offer analysis and opinions on global trends. These important changes are documented in the Tow Center’s report The New Global Journalism: Foreign Correspondence in Transition.

The report’s authors include Kelly Golnoush Niknejad, founder of Tehran Bureau, a digital native site based in London that reports on Iran using dispatches from correspondents both inside and outside the country. Anup Kaphle, digital foreign editor of The Washington Post, profiles the legacy foreign desk in transition as it seeks to merge the work of traditional correspondents and the contributions of new digital reporters, who may never leave the newsroom as they write about faraway events.

These new practices require new skills, and Ahmed Al Omran, a Wall Street Journal correspondent in Saudi Arabia, walks reporters through eight tactics that will improve their use of digital tools to find and verify information on international stories. The Internet security issues raised by this work are discussed in another chapter by Burcu Baykurt, a PhD candidate at Columbia Journalism School, who also examines how Internet governance affects journalists and others using the web to disseminate information.

And Jessie Graham, a former public radio reporter who is now a multimedia producer for Human Rights Watch, describes the shifting line between advocacy and journalism. Some of the journalists affected by closings of foreign bureaus have made comfortable transitions to jobs in advocacy organizations — while those same organizations have increasingly developed their media skills to communicate directly with audiences, without the filter of mainstream media.

Through practical guidance and descriptions of this changing journalistic ecosystem, the Tow Center hopes that The New Global Journalism can help define a new, hybrid foreign correspondent model—not a correspondent who can do everything, but one open to using all reporting tools and a wide range of sources to bring audiences a better understanding of the world.

 

Download the Full Report Here: The New Global Journalism

Chapters:

Being There: The Virtual Eyewitness, By Kelly Golnoush Niknejad

The Foreign Desk in Transition: A Hybrid Approach to Reporting From There—and Here, By Anup Kaphle

A Toolkit: Eight Tactics for the Digital Foreign Correspondent, By Ahmed Al Omran

David Versus Goliath: Digital Resources for Expanded Reporting—and Censoring,  By Burcu Baykurt

A Professional Kinship? Blurring the Lines Between International Journalism and Advocacy, By Jessie Graham

Edited By: Ann Cooper and Taylor Owen


Sensors and Certification


This is a guest post from Lily Bui, a sensor journalism researcher from MIT’s Comparative Media Studies program.

On October 20, 2014, Creative Commons Science convened a workshop involving open hardware/software developers, lawyers, funders, researchers, entrepreneurs, and grassroots science activists around a discussion about the certification of open sensors.

To clarify some terminology, a sensor can either be closed or open. Whereas closed technologies are constrained by an explicitly stated intended use and design (e.g., an arsenic sensor you buy at Home Depot), open technologies are intended for modification and not restricted to a particular use or environment (e.g., a sensor you can build at home based on a schematic you find online).

Over the course of the workshop, attendees listened to sessions led by practitioners who are actively thinking about whether and how a certification process for open hardware might mitigate some of the tensions that have arisen within the field, namely around the reliability of open sensor tools and the current challenges of open licensing. As we may gather from the Tow Center’s Sensors and Journalism report, these tensions become especially relevant to newsrooms thinking of adopting open sensors for collecting data in support of journalistic inquiry. Anxieties about data provenance, sensor calibration, and best practices for reporting on sensor data also permeate this discussion. This workshop provided a space to begin articulating what sensor journalism needs in order to move forward.

Below, I’ve highlighted the key points of discussion around open sensor certification, especially as they relate to the evolution of sensor journalism.

Challenges of Open Sensors

How, when, and why do we trust a sensor? For example, when we use a thermometer, do we think about how well or often it has been tested, who manufactured it, or what standards were used to calibrate it? Most of the time, the answer is no. The division of labor that brings the thermometer to you is mostly invisible, yet you inherently trust that the reading it gives is an accurate reflection of what you seek to measure. So, what is it that instantiates this automatic trust, and what needs to happen around open sensors for people to likewise have confidence in them?

At the workshop, Sonaar Luthra of Water Canary led a session about the complexities and challenges that accompany open sensors today. Most concerns revolve around accuracy, both of the sensor itself and of the data it produces. One reason for this is that the manufacture and integration of sensors are separate processes (that is to say, for example, InvenSense manufactures an accelerometer and Apple integrates it into the iPhone). Similarly, within the open source community, the development and design of sensors and their software are often separate processes from an end user’s assembly—a person looks up the open schematic online, buys the necessary parts, and builds it at home. This division of labor erodes the boundaries between hardware, software, and data, creating a need to recast how trust is established in sensor-based data.

For journalists, a chief concern around sensor data is ensuring, with some degree of confidence, that the data collected from the sensor is not erroneous and won’t add misinformation to the public sphere if published. Of course, this entirely depends on how and why the sensor is being used. If we think of accuracy as a continuum, then the degree of accuracy required can vary depending on the context. If the intent is to gather a lot of data and look at general trends—as was the case with the Air Quality Egg, an open sensor that measures air quality—point-by-point accuracy is less of a concern when engagement is the end goal. However, different purposes and paradigms require different metrics. In the case of StreetBump, a mobile app that uses accelerometer data to help identify potential potholes, accuracy is a much more salient issue, since direct intervention from the city would mean allocating resources and labor toward the locations the sensor data suggests. Thus, creating a model to work toward shared parameters, metrics, resources, and methods might be useful to generate consensus and alleviate factors that threaten data integrity.

There may also be alternative methods for verifying sensor data and accounting for known biases in it. Ushahidi’s Crowdmap is an open platform used internationally to crowdsource crisis information; its reports depend on verification by other users for an assessment of accuracy. One can imagine a similar system for sensor data, pre-publication or even in real time. And if a sensor has a known bias in a certain direction, it is possible to compare its data against an established standard (e.g., EPA data) and account for the bias when reporting on the data, as sketched below.
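To make that bias-accounting idea concrete, here is a minimal sketch, assuming a constant offset that can be estimated against a co-located reference monitor (for instance, an EPA station). The readings, variable names, and the constant-offset assumption are all illustrative, not drawn from the workshop.

```python
import numpy as np

# Illustrative readings from an open sensor and a co-located reference
# monitor (e.g., an EPA station) taken over the same calibration period.
sensor_readings = np.array([41.2, 39.8, 44.5, 47.1, 43.3])
reference_readings = np.array([38.0, 36.5, 41.0, 44.0, 40.2])

# Estimate the sensor's constant bias from the calibration period...
bias = float(np.mean(sensor_readings - reference_readings))

# ...then subtract it from later field readings before reporting on them.
new_field_readings = np.array([45.0, 52.3, 48.7])
corrected = new_field_readings - bias

print(f"estimated bias: {bias:.2f}")
print(f"corrected readings: {corrected}")
```

Real calibrations are usually more involved (drift, temperature dependence, non-linear response), but the principle of disclosing and correcting for a known bias is the same.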

To further investigate these questions, we can look toward extant models of verification in open science and technology communities. The Open Geospatial Consortium provides a way of thinking about interoperability among sensors, which requires that a consensus around standards or metrics be established. Alternatively, the Open Sensor Platform suggests ways of thinking about data acquisition, communication, and interpretation across various sensor platforms.

Challenges of Open Licensing for Sensors

A handful of licensing options exist for open hardware, including the CERN Open Hardware License, the Open Compute Project License, and the Solderpad License. Other intellectual property strategies include copyright (which can be easily circumvented and is sometimes questionable when it comes to circuits), patents (which are difficult and costly to obtain), and trademark (an option that offers a lower barrier to entry and would best meet the needs of open source approaches). However, whether formal licensing should be applied to open hardware at all remains an open question, as it would inevitably impose restrictions on a design or version of hardware that—within the realm of open source—is still susceptible to modification by the original developer or the open source community writ large. In other words, a licensing or certification process would turn what is now an ongoing project into a final product.

Also, in contrast to open software, wherein the use of open code is clearly demarcated and tracked by the process of copying and pasting, it is less clear at what point a user actually agrees to using open hardware (i.e., upon purchase or assembly, etc.) since designs often involve a multitude of components and are sometimes accompanied by companion software.

A few different approaches to assessing open sensors emerged during the workshop:

  1. Standards. A collaborative body establishes interoperable standards among open sensors, allowing for independent but overlapping efforts. (Targeted toward the sensor.)
  2. Certification/Licensing. A central body controls a standard, facilitates testing, and manages intellectual property. (Targeted toward the sensor.)
  3. Code of conduct. There exists a suggestion of uses and contexts for the sensor, i.e., how to use it and how not to use it. (Targeted toward people using the sensor.)
  4. Peer assessment. Self-defined communities test and provide feedback on sensors as seen in the Public Lab model. (Targeted toward the sensor but facilitated by people using it.)

In the case of journalism, methods of standardization would depend on how much (or little) granularity of data is necessary to effectively tell a story. In the long run, it may be that the means of assessing a sensor will be largely contextual, creating a need to develop a multiplicity of models for these methods.

Preliminary Conclusions

While there is certainly interest from newsrooms and individual journalists in engaging with sensor tools as a valid means for collecting data about their environments, it is not yet apparent what newsrooms and journalists expect from open sensors and for which contexts open sensor data is most appropriate. The products of this workshop are relevant to evaluating what standards—if any—might need to be established before sensors can be more widely adopted by newsrooms.

In the future, we must keep some important questions in mind: What matters most to newsrooms and journalists when it comes to trusting, selecting, and using a sensor tool for reporting? Which sensor assessment models would be most useful, and in which context(s)?

With regard to the certification of open sensors, it would behoove all stakeholders—sensor journalists included—to determine a way to move the discourse forward.

References

  1. Fergus Pitt, Sensors and Journalism, Tow Center for Digital Journalism, May 2014.
  2. N. Bourbakis and A. Pantelopoulos, “A Survey on Wearable Sensor-based Systems for Health Monitoring and Prognosis,” IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, Vol. 40, No. 1 (Jan. 2010).
  3. Open Source Hardware Association (OSHWA), Definition page.


Sensors and Journalism: ProPublica, Satellites and The Shrinking Louisiana Coast


Two months before the programmer-journalists at ProPublica would be ready to go live with an impressive news app illustrating the huge loss of land along the Louisiana coastline, the development team gathered in their conference room above the Financial District in Manhattan.

This was partly a show-off session and partly a review. Journalist-developers Brian Jacobs and Al Shaw pulled a browser up onto the glossy 46-inch screen and loaded up their latest designs. At first glance the app appears to be simple satellite photography, spanning about 20,000 square miles, but its elegance hides a complicated process of pulling layers of meaning from many rich data sets.

At the heart of the story is the fact that the Louisiana coastline loses land at a rate equivalent to a football field each hour, which comes to about 16 square miles per year. The land south of New Orleans has always been low-lying, but since the Army Corps of Engineers built levees along the Mississippi after the huge 1927 floods, the delta has been losing ground. Previously, the river carried sediment down and deposited it to gradually build up dry land throughout the delta. The same levees that protect upstream communities now block that sediment from escaping the river and settling to become Louisiana coastline. Environmental researchers say that the energy industry’s canal-dredging and well-drilling have accelerated natural erosion. Together, the constricted river and the oil extraction have exacerbated the effect of sea level rise from climate change.

The loss of ground endangers people. The dry land used to protect New Orleans’ people and businesses, because storms like Hurricane Katrina lose power as they move from water to land when they sweep in from the Gulf. It’s therefore crucial to have a wide buffer between the sea and the city. Now, with roughly 2,000 fewer square miles of protective land, the state will have to spend more money building tougher, higher walls, flood insurance will be more costly, infrastructure could break, and the people inside those walls risk death and injury at much higher rates. If the land loss isn’t slowed, the costs will only climb.

Satellites Clearly Show The Story

For this story, Al Shaw’s goal was to illustrate the scale and severity of the problem. Print journalists have written extensively about it, but the forty years’ worth of remote sensing data available from NASA’s Landsat satellites helped the ProPublica journalists show the story with immediate power and clarity. They processed Landsat 8 sensing data themselves and drew on the US Geological Survey’s interpretations of data from earlier Landsat craft.

The project combines a high-level view with eight zoomed-in case studies. The scene of land, marsh and water known locally as the Texaco Canals forms one of the most dramatic examples. Starting with data collected from aerial photography in the 1950s and ending with 2012 satellite data, the layered maps show how the canals sliced up the marshlands and how the displaced soil stopped sediment from replenishing the land. The result is an area that starts mostly as land and ends mostly as open water. Contemporary and archival photos complement the bird’s-eye view with a human-level perspective.

This is Satellite Sensing’s Learning Curve

At this point, we need to reveal a conflict of interest. In February 2014 the Tow Center provided training to four journalists from ProPublica. Lela Prashad, a remote sensing specialist who has worked with NASA, led a two-day workshop covering the fundamental physics of satellite sensing, a briefing on the different satellite types and their qualities, where to find satellite data, and the basics of how to process it. ProPublica news apps director Scott Klein had attended a Tow Center journalistic sensing conference eight months earlier, where a presentation by Arlene Ducao and Ilias Koen on their satellite infrared maps of Jakarta convinced him that ProPublica’s innovative newsroom might be able to use remote sensing to cover some of its environmental stories in new ways.

To produce this work, the ProPublica journalists learned the physics and applications of remote sensing technology. The earth’s surface pushes energy out into the atmosphere and space – some of it an immediate reflection of the sun’s rays, some of it energy absorbed earlier and re-emitted. Human sources like city lights and industrial activity also produce emissions. Energy waves range from the high-frequency, short-wavelength gamma rays and x-rays, through ultraviolet, into the visible spectrum (what human eyes sense) and on toward the longer wavelengths of infrared, microwave and radio.
Satellites flown by NASA (and increasing numbers of private companies) point cameras toward Earth, taking pictures of the energy that passes through the atmosphere – the ultraviolet, visible and infrared bands. (The various generations of satellites have had different capabilities; as they have developed, they have recorded Earth in more detail and passed overhead more frequently.)
Those scenes, when processed, can reveal with great accuracy the materials that form Earth’s surface. The exact hue of each pixel represents the specific substance below. Geologists needing to identify types of rock take advantage of the fact that, for example, sandstone reflects a different combination of energy waves than granite. Food security analysts can assess the moisture, and therefore the health, of a country’s wheat crop – helping them predict shortages (and speculators predict pricing). ProPublica is showing the Louisiana coastline changing over time from dry land, to marsh, to open water.

The US Geological Survey (USGS) makes its data available through a series of free online catalogues. Registered users can specify the area they are interested in, pick a series of dates and download image files that include all the available energy bands. Crucially, those image files include the Geographic Information Systems (GIS) metadata that allow journalists to precisely match the pixels in the data files to known locations.
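The article doesn’t specify which tools were used for this step, but as an illustration, the GIS metadata embedded in a downloaded Landsat GeoTIFF can be inspected with a few lines of Python using the rasterio library; the filename below is hypothetical.

```python
import rasterio

# Open one band of a downloaded Landsat scene (path is illustrative) and
# print the metadata that ties its pixels to real-world locations.
with rasterio.open("LC08_landsat_scene_B4.TIF") as src:
    print(src.crs)        # coordinate reference system of the scene
    print(src.transform)  # affine transform: pixel (row, col) -> map (x, y)
    print(src.bounds)     # geographic extent covered by the image
```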

 

How The Developers Built it

Brian Jacobs learned how to reproduce and combine the information in an accessible form for ProPublica’s online audience. The opening scene of the app has eight layers. The top one uses a scanned copy of a 1922 survey map owned by the USGS and digitized by the Louisiana State University library. Jacobs pulled it into his mapping software to match the geographic features with GIS location data and used Photoshop to prepare it for online display, cutting out the water and normalizing the color.

The bottom layer displays the 2014 coastline – stitched together from six Landsat 8 tiles through many steps of processing. Jacobs picked out images from satellite passes when the skies were free of cloud cover. After pulling in the image tiles from the infrared and true-color bands and merging them together, Jacobs normalized the distortions and color differences so the separate images would mosaic consistently.
Working with the command-line tools GDAL (a geospatial library) and ImageMagick (an image-editing suite), he prepared them for online display. Pictures of the Earth’s curved surface need to be slightly warped to make sense as flat images; the types of warps are called projections. The raw USGS images come in the WGS84-based projection standard used by the space industry, but the web mostly uses Mercator. (Here’s Wikipedia‘s explanation, and xkcd’s cartoon version.)
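As a rough, self-contained illustration of that warping step (not ProPublica’s actual script), here is how a mosaicked GeoTIFF could be reprojected to Web Mercator in Python with the rasterio library, which wraps GDAL. Filenames are hypothetical.

```python
import rasterio
from rasterio.warp import calculate_default_transform, reproject, Resampling

DST_CRS = "EPSG:3857"  # Web Mercator, the projection most web maps expect

with rasterio.open("louisiana_mosaic.tif") as src:
    # Work out the shape and transform of the warped output grid.
    transform, width, height = calculate_default_transform(
        src.crs, DST_CRS, src.width, src.height, *src.bounds)
    profile = src.profile.copy()
    profile.update(crs=DST_CRS, transform=transform, width=width, height=height)

    with rasterio.open("louisiana_mosaic_webmercator.tif", "w", **profile) as dst:
        for band in range(1, src.count + 1):
            # Warp each band from the source projection into Web Mercator.
            reproject(
                source=rasterio.band(src, band),
                destination=rasterio.band(dst, band),
                src_transform=src.transform,
                src_crs=src.crs,
                dst_transform=transform,
                dst_crs=DST_CRS,
                resampling=Resampling.bilinear)
```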

Researchers who work with remote sensing have a specific language and sets of practices for how they treat color in their visualizations. The ProPublica journalists adopted some of those practices, but also needed to produce their work for a lay audience. So, although the features on ProPublica’s maps are easily recognizable, they are not what’s known as ‘true color’. When viewers look closely at the bottom layer, it’s clear that these are not simply aerial photographs. In comparison to satellite photography displayed via Google maps, the ProPublica layer has a much sharper contrast between land and water. The green pixels showing land are vibrant, while the blue sections showing water are rich, deep blues.

The color palette is, in fact, a combination of two sets of satellite data: the water pixels are interpreted from Landsat’s infrared and green bands, while the land pixels come from Landsat’s ‘true color’ red, green and blue bands, with extra sharpening from the panchromatic band (which appears as shades of gray but can be used to sharpen color imagery). At 30m/pixel, Landsat’s color bands are lower resolution than its 15m/pixel panchromatic band.
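As a sketch of how that water/land split can be derived (not ProPublica’s actual code), a water mask is commonly built from the green and near-infrared bands with a normalized difference water index; the land side would then come from a pansharpened true-color composite. Filenames and the threshold below are illustrative.

```python
import rasterio

# Read the green and near-infrared bands of a Landsat scene (paths illustrative).
with rasterio.open("green_band.tif") as g, rasterio.open("nir_band.tif") as n:
    green = g.read(1).astype("float32")
    nir = n.read(1).astype("float32")

# Normalized Difference Water Index: water reflects green light strongly and
# absorbs near-infrared, so positive values generally indicate open water.
ndwi = (green - nir) / (green + nir + 1e-6)
water_mask = ndwi > 0.0  # illustrative threshold; real work would tune this

# The mask can then be used to render water pixels separately from the
# pansharpened true-color land pixels (pansharpening itself not shown here).
```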

Step By Step Frames

This is a detail of a single tile, in a single band, but color-corrected.

A detail of a single tile in the true-color space, somewhat color-corrected.

At this point, the developers have stitched together multiple tiles of their area and combined images from the true-color and panchromatic bands, a process known as pansharpening.

The water mask

This is the mask that ProPublica produced from the near-infrared and green bands. It’s used to make a distinction between the areas of land and water.

Pansharpened, zoomed

This frame shows the final result of ProPublica’s satellite image processing. At this point the images have been pansharpened and the water layer has been included from the near IR and green band.

The final view that ProPublica showed their users.

This shows the underlay for ProPublica’s case studies. The land pixels combine the true-color bands and the higher-resolution panchromatic band; the water pixels come from the infrared and green bands.

Google satellite view, zoomed

The same area, as shown in Google maps’ satellite view. Mostly, it uses true-color satellite imagery for land, and bathymetry data for water.

A detail of a USGS map of the region. Each color represents a period of land loss; ProPublica extracted each period’s pixels into a separate layer of its interactive map.

The other layers come from a range of sources. In the opening scene, viewers can bring up overlays of the human-built infrastructure associated with the oil and gas industry, including the wells and pipelines, the dredged canals, and the levees that protect the homes and businesses around the coastline.

When users zoom in to one of ProPublica’s case studies, they can scrub through another 16 layers. Each one shows a slice of time when the coastline receded; a layer of olive green pixels indicates the remaining dry land. The data for these 16 layers came from researchers at the USGS who had analyzed 37 years of satellite data combined with historical surveys and mid-century aerial photography. ProPublica worked with John Barras at the USGS, a specialist who could draw on years of his own work and decades of published studies. He handed over a large geo-referenced image file exported from the software suite ERDAS Imagine, with each period’s land loss rendered in a separate color.

The Amount of Time, Skill and Effort

Scott Klein described this project as one of ProPublica’s larger ones, but not abnormally so. His team of developer-journalists releases around twelve projects of this size each year, as well as producing smaller pieces to accompany work by the rest of ProPublica’s newsroom.

For six months, the project was a major focus for Al Shaw and Brian Jacobs, both young, highly skilled and prized developer-journalists. Shaw has a BA and is also highly active in New York’s Hacks/Hackers community. Jacobs is a Knight-Mozilla Fellow working at ProPublica, with a background that includes a year at MIT’s Senseable City Lab and four years as a UI designer at Azavea, a Philadelphia-based geospatial software company. They worked on the project close to full time, with oversight from their director, Scott Klein. During the later stages, ProPublica’s design director David Sleight advised on the interaction design, hired a freelance illustrator and led user testing. ProPublica partnered with The Lens, a non-profit public-interest newsroom based in New Orleans, whose environmental reporter Bob Marshall wrote the text. The Lens also sourced three freelance photo researchers and photographers for ProPublica.

ProPublica Have Shared Their Tools

To produce the work, ProPublica had to extend the ‘simple-tiles’ software library they use to publish maps – a process that soaked up months of developer time. They’ve now open-sourced that code – a move that can radically speed up development for other newsrooms with skilled developers. In common with most news organizations, the interactive maps ProPublica has historically published have used vector graphics, which display as outlines of relatively simple geographic and city features like states, roads and building footprints. This project renders raster (aka bitmap) images, the kind of file used for complicated or very detailed visual information.

ProPublica’s explanation of their update to simple-tiles is available on their blog, and the code is available via GitHub.

Their Launch

ProPublica put the app live on the 28th of August, exactly nine years after Hurricane Katrina prompted New Orleans’ mayor to order the city’s first-ever mandatory evacuation.


Upcoming Events


All-Class Lecture: The New Global Journalism

Tuesday, Sep. 30, 2014, 6:00pm

(Lecture Hall)

Based on a new report from the Tow Center, a panel discussion on how digital technology and social media have changed the work of journalists covering international events. #CJSACL

Panelists include report co-authors: 

Ahmed Al Omran, Saudi Arabia correspondent at The Wall Street Journal

Burcu Baykurt, Ph.D. candidate in Communications at Columbia Journalism School

Jessie Graham, Senior Multimedia Producer at Human Rights Watch

Kelly Golnoush Niknejad, Editor-in-Chief at Tehran Bureau

The program will be moderated by Dean of Academic Affairs Sheila Coronel.

Event begins at 6 PM

RSVP is requested at JSchoolRSVP@Columbia.edu


Upcoming Tow Event: Just Between You and Me?


Just between you and me?

(Pulitzer Hall – 3rd Floor Lecture Hall)

In the wake of the Snowden disclosures, digital privacy has become more than just a hot topic, especially for journalists. Join us for a conversation about surveillance, security and the ways in which “protecting your source” means something different today than it did just a few years ago. And, if you want to learn some practical, hands-on digital security skills—including tools and techniques relevant to all journalists, not just investigative reporters on the national security beat—stick around to find out what the Tow Center research fellows have in store for the semester.

The event will be held at 6 p.m. on Monday, August 25th in the 3rd Floor Lecture Hall of Pulitzer Hall. We welcome and encourage all interested students, faculty and staff to attend.


Hyper-compensation: Ted Nelson and the impact of journalism


NewsLynx is a Tow Center research project and platform aimed at better understanding the impact of news. It is conducted by Tow Fellows Brian Abelson, Stijn DeBrouwere & Michael Keller.

“If you want to make an apple pie from scratch, you must first invent the universe.” — Carl Sagan

Before you can begin to measure impact, you need to first know who’s talking about you. While analytics platforms provide referrers, social media sites track reposts, and media monitoring tools follow mentions, these services are often incomplete and come with a price. Why is it that, on the internet — the most interconnected medium in history — tracking linkages between content is so difficult?

The simple answer is that the web wasn’t built to be *fully* connected, per se. It’s an idiosyncratic, labyrinthine garden of forking paths with no way to navigate from one page to pages that reference it.

We’ve spent the last few months thinking about and building an analytics platform called NewsLynx, which aims to help newsrooms better capture the quantitative and qualitative effects of their work. Many of our features are aimed at giving newsrooms a better sense of who is talking about their work. This seemingly simple feature, understanding the links among web pages, has taken up the majority of our time. The obstacle turns out to be a shortcoming in the fundamental architecture of the web. Without that shortcoming, however, the web might never have succeeded.

The creator of the web, Tim Berners-Lee, didn’t provide a means for contextual links in the specification for HTML. The world wide web wasn’t the only idea for networking computers, however. Over 50 years ago an early figure in computing had a different vision of the web – a vision that would have made the construction of NewsLynx a lot easier today, if not completely unnecessary.

Around 1960, a man named Ted Nelson came up with an idea for a structure of linking pieces of information in a two-way fashion. Whereas links on the web today just point one way — to the place you want to go — pages on Nelson’s internet would have a “What links here?” capability, so you would know all the websites that point to your page.

And if you were dreaming up the ideal information web, this structure makes complete sense: why not make the most connections possible? As Borges writes, “I thought of a labyrinth of labyrinths, of one sinuous spreading labyrinth that would encompass the past and the future and in some way involve the stars.”

Nelson called his project Xanadu, but it had the misfortune of being both extremely ahead of its time and incredibly late to the game. Project Xanadu’s first and somewhat cryptic release debuted this year: over 50 years after it was first conceived.

In the meantime, Berners-Lee put forward HTML, with its one-way links, in the early ’90s, and it took off into what we know today. One of the reasons for the web’s success is its extremely informal, ad hoc functionality: anyone can put up an HTML page without hooking into or caring about a more elaborate system. Compared to Xanadu, what we use today is the quick-and-dirty implementation of a potentially much richer, but also much harder to maintain, ecosystem.

Two-way linking would not only make impact research easier but would also address a number of other problems on the web. In his latest book, “Who Owns the Future?”, Jaron Lanier discusses two-way linking as a potential solution to copyright infringement and a host of other web maladies. His logic is that if you could always know who is linking where, then you could create a system of micropayments to make sure authors get proper credit. His idea has its own caveats, but it shows the kinds of systems that two-way linking might enable. Chapter Seven of Lanier’s book discusses some of the other reasons Nelson’s idea never took off.

The desire for two-way links has not gone away, however. In fact, the *lack* of two-way links is an interesting lens through which to view the current tech environment. By creating a central server that catalogs and makes sense of the one-way web, Google adds value with its ability to make the internet seem more like Project Xanadu. If two-way links existed, you wouldn’t need all of the features of Google Analytics. People could implement their own search engines with their own page rank algorithms based on publicly available citation information.

The inefficiency of one-way links left a hole at the center of the web for a powerful player to step in and play librarian. As a result, if you want to know how your content lives online, you have to go shopping for analytics. To effectively monitor the life of an article, newsrooms currently use a host of services from trackbacks and Google Alerts to Twitter searches and ad hoc scanning. Short link services break web links even further. Instead of one canonical URL for a page, you can have a bit.ly, t.co, j.mp or thousands of other custom domains.
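One small, concrete piece of that monitoring problem is normalizing shortened links back to a canonical URL. As a hedged illustration (not NewsLynx’s actual code), redirects can simply be followed:

```python
import requests

def resolve_short_link(url: str) -> str:
    """Follow redirects from a shortener (bit.ly, t.co, j.mp, ...) to the final
    URL, so that mentions can be matched against one canonical address.
    Note: some shorteners reject HEAD requests; a GET fallback is omitted here."""
    response = requests.head(url, allow_redirects=True, timeout=10)
    return response.url

# Hypothetical usage:
# resolve_short_link("https://bit.ly/abc123")  # -> "https://example.com/article"
```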

NewsLynx doesn’t have the power of Google. But we have been working on a core feature that would leverage Google tools and other two-way link surfacing techniques to make monitoring the life of an article much easier: we’re calling them “recipes,” for now (#branding suggestions welcome). In NewsLynx, you’ll add these “recipes” to the system and it will alert you to all pending mentions in one filterable display. If a citation is important, you can assign it to an article or to your organization more generally. We also have a few built-in recipes to get you started. A rough sketch of the idea follows.
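NewsLynx’s recipe internals aren’t described here, so the sketch below is only an illustration of what a “recipe” might do: poll an alerts feed (for example, a Google Alerts RSS URL) with the feedparser library and surface matching entries as pending mentions. The feed URL and keywords are hypothetical.

```python
import feedparser

def run_alert_recipe(feed_url, keywords):
    """Poll an alerts feed and return entries whose titles mention any of the
    given keywords, as pending 'mentions' for an editor to review."""
    feed = feedparser.parse(feed_url)
    mentions = []
    for entry in feed.entries:
        title = entry.get("title", "")
        if any(keyword.lower() in title.lower() for keyword in keywords):
            mentions.append({"title": title, "link": entry.get("link", "")})
    return mentions

# Hypothetical usage:
# run_alert_recipe("https://www.google.com/alerts/feeds/<feed-id>", ["NewsLynx"])
```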

We’re excited to get this tool into the hands of news sites and see how it helps them better understand their place in the world wide web. As we prepare to launch the platform in the next month or so, check back here for any updates.


Why We Like Pinterest for Fieldwork: Research by Nikki Usher and Phil Howard


Nikki Usher, GWU

Phil Howard, UW and CEU

7/16/2014

Anyone tackling fieldwork these days can choose from a wide selection of digital tools for their methodological toolkit. Among the best of these are platforms that let you archive, analyze, and disseminate at the same time. It used to be that these were fairly distinct stages of research, especially for the most positivist among us. You came up with research questions, chose a field site, entered the field site, left the field site, analyzed your findings, got them published, and shared your research output with friends and colleagues.

 

But the post-positivist approach that many of us like involves adapting your research questions—reflexively and responsively—while doing fieldwork. Entering and leaving your field site is not a cool, clean and complete process. We analyze findings as we go, and involve our research subjects in the analysis. We publish, but often in journals or books that can’t reproduce the myriad digital artifacts that are meaningful in network ethnography. Actor network theory, activity theory, science and technology studies and several other modes of social and humanistic inquiry approach research as something that involves both people and devices. Moreover, the dissemination of work doesn’t have to be something that happens after publication or even at the end of a research plan.

 

Nikki’s work involves qualitative ethnographic work at field sites where research can last from five months to a brief week-long visit to a quick drop-in day. She learned the hard way from her research for Making News at The New York Times that failing to find a good way to organize and capture images was a missed opportunity post-data collection. Since then, Nikki has been using Pinterest for fieldwork image gathering quite a bit. Phil’s work on The Managed Citizen was set back when he lost two weeks of field notes on the chaotic floor of the Republican National Convention in 2000 (security incinerates all the detritus left by convention goers). He’s been digitizing field observations ever since.

 

Some people put together personal websites about their research journey. Some share over Twitter. And there are plenty of beta tools, open source or otherwise, that people play with. We’ve both enjoyed using Pinterest for our research projects. Here are some points on how we use it and why we like it.

 

How To Use It

  1. When you start, think of this as your research tool and your resource.   If you dedicate yourself to this as your primary archiving system for digital artifacts you are more likely to build it up over time. If you think of this as a social media publicity gimmick for your research, you’ll eventually lose interest and it is less likely to be useful for anyone else.
  2. Integrate it with your mobile phone because this amps up your capacity for portable, taggable, image data collection.
  3. Link the board posts to Twitter or your other social media feeds. Pinterest itself isn’t that lively a place for researchers yet. The people who want to visit your Pinterest page are probably actively following your activities on other platforms so be sure to let content flow across platforms.
  4. Pin lots of things, and lots of different kinds of things. Include decent captions though be aware that if you are feeding Twitter you need to fit character limits.
  5. Use it to collect images you have found online, images you’ve taken yourself during your fieldwork, and invite the communities you are working with to contribute.
  6. Backup and export things once in a while for safe keeping. There is no built-in export function, but there are a wide variety of hacks and workarounds for transporting your archive.

 

What You Get

  1. Pinterest makes it easy to track the progress of the image data you gather. You may find yourself taking more photos in the field because they can be easily arranged, saved and categorized.
  2. Using it regularly adds another level of data as photos and documents captured on phone and then added on Pinterest can be quickly field captioned and then re-catalogued, giving you a chance to review the visual and built environment of your field site and interrogate your observations afresh.
  3. Visually-enhanced constant comparative methods: post-data collection, you can go beyond notes to images and captions that are easily scanned for patterns and points of divergence. This may be going far beyond what Glaser and Strauss had imagined, of course.
  4. Perhaps most important, when you forget what something looks like when you’re writing up your results, you’ve got an instant, easily searchable database of images and clues to refresh your memory.

Why We Like It

  1. It’s great for spontaneous presentations. Images are such an important part of presenting any research. Having a quick, publicly accessible archive of content allows you to speak, on the fly, about what you are up to. You can’t give a tour of your Pinterest page for a job talk. But having the resource there means you can call on images quickly during a Q&A period, or quickly load something relevant on a phone or browser during a casual conversation about your work.
  2. It gives you a way to interact with subjects. Having the Pinterest link allows you to show a potential research subject what you are up to and what you are interested in. During interviews it allows you to engage people on their interpretation of things. Having visual prompts handy can enrich and enliven any focus group or single subject interview. These don’t only prompt further conversation, they can prompt subjects to give you even more links, images, videos and other digital artifacts.
  3. It makes your research interests transparent. Having the images, videos and artifacts for anyone to see is a way for us to show what we are doing. Anyone with interest in the project and the board link is privy to our research goals. Our Pinterest page may be far less complicated than many of our other efforts to explain our work to a general audience.
  4. You can disseminate as you go. If you get the content flow right, you can tell people about your research as you are doing it. Letting people know about what you are working on is always a good career strategy. Giving people images rather than article abstracts and draft chapters gives them something to visualize and improves the ambient contact with your research community.
  5. It makes digital artifacts more permanent. As long as you keep your Pinterest, what you have gathered can become a stable resource for anyone interested in your subjects. As sites and material artifacts change, what you have gathered offers a permanent and easily accessible snapshot of a particular moment of inquiry for posterity.

 

Pinterest Wish-list

One of us is a Windows Phone user (yes, really) and it would be great if there were a real Pinterest app for Windows Phone. One-touch integration from the iPhone camera roll, much like Twitter, Facebook, and Flickr offer, would also be great (though there is an easy hack).

 

We wish it would be easier to have open, collaborative boards. Right now, the only person who can add to a board is you, at least at first. You can invite other people to join a “group board” via email, but Pinterest does not have open boards that allow anyone with a board link to add content.

 

Here’s a look at our Pinboards: Phil Howard’s Tech + Politics board, and Nikki Usher’s boards on U.S. Newspapers. We welcome your thoughts…and send us images!

 

 

 

 

Nikki Usher is an assistant professor at the George Washington University’s School of Media and Public Affairs. Her project with Columbia’s Tow Center for Digital Journalism is Post-Industrial News Spaces and Places. Phil Howard is a professor at the Central European University and the University of Washington. His project is a book on Political Power and the Internet of Things for Yale University Press.

 


Digital Security and Source Protection For Journalists: Research by Susan McGregor


EXECUTIVE SUMMARY

The law and technologies that govern the functioning of today’s digital communication systems have dramatically affected journalists’ ability to protect their sources.  This paper offers an overview of how these legal and technical systems developed, and how their intersection exposes all digital communications – not just those of journalists and their sources – to scrutiny. Strategies for reducing this exposure are explored, along with recommendations for individuals and organizations about how to address this pervasive issue.

 

DOWNLOAD THE PDF


 

 

 



Order a (bound) printed copy.

Comments, questions & contributions are welcome on the version-controlled text, available as a GitBook here:

http://susanemcg.gitbooks.io/digital-security-for-journalists/

DIGITAL SECURITY AND SOURCE PROTECTION FOR JOURNALISTS

Preamble

Digital Security for Journalists: A 21st Century Imperative

The Law: Security and Privacy in Context

The Technology: Understanding the Infrastructure of Digital Communications

The Strategies: Understanding the Infrastructure of Digital Communications

Looking Ahead

Footnotes

 


Knight Foundation joins The Tow Foundation as a sponsor for the initiative headed by Columbia University’s Tow Center for Digital Journalism


“Tow Center program defends journalism from the threat of mass surveillance,” by Jennifer Henrichsen and Taylor Owen on Knight Blog

NEW YORK – June 10, 2014 – The Journalism After Snowden initiative, a project of The Tow Center for Digital Journalism at Columbia University Graduate School of Journalism, will expand to further explore the role of journalism in the age of surveillance, thanks to new funding from the John S. and James L. Knight Foundation.

Journalism After Snowden will contribute high-quality conversations and research to the national debate around state surveillance and freedom of expression through a yearlong series of events, research projects and articles that will be published in coordination with the Columbia Journalism Review.

Generous funding from The Tow Foundation established the initiative earlier in the academic year. The initiative officially kicked off in January with a high-level panel of prominent journalists and First Amendment scholars who tackled digital privacy, state surveillance and the First Amendment rights of journalists.

Read more in the press release from the Knight Foundation.


Glenn Greenwald Speaks | Join the Tow Center for an #AfterSnowden Talk in San Francisco on June 18, 2014


Join the Tow Center for an evening lecture with Glenn Greenwald, who will discuss the state of journalism today and his recent reporting on surveillance and national security issues, on June 18, 2014 at 7pm at the Nourse Theater in San Francisco.

In April 2014, Greenwald and his colleagues at the Guardian received the Pulitzer Prize for Public Service. Don’t miss Greenwald speaking in person as he fits all the pieces together, recounting his high-intensity eleven-day trip to Hong Kong, examining the broader implications of the surveillance detailed in his reporting, and revealing fresh information on the NSA’s unprecedented abuse of power with never-before-seen documents entrusted to him by Snowden himself. The event is sponsored by Haymarket Books, the Center for Economic Research and Social Change, the Glaser Progress Foundation, and the Tow Center for Digital Journalism at Columbia Journalism School. Reserve your seat for Glenn Greenwald Speaks: Edward Snowden, the NSA, and the U.S. Surveillance State.

Please note: this is a ticketed event. Tickets are $4.75 each.  | Purchase Tickets

This event is part of Journalism After Snowden, a yearlong series of events, research projects and writing from the Tow Center for Digital Journalism in collaboration with the Columbia Journalism Review. For updates on Journalism After Snowden, follow the Tow Center on Twitter @TowCenter #AfterSnowden.

Journalism After Snowden is funded by The Tow Foundation and the John S. and James L. Knight Foundation.

Lauren Mack is the Research Associate at the Tow Center. Follow her on Twitter @lmack.


Tow Center Launches Amateur Footage: A Global Study of User-Generated Content in TV and Online News Output


Crediting is rare, there is a huge gulf between how senior managers and newsdesks talk about it, and there is a significant reliance on news agencies for discovery and verification. These are some of the key takeaways of Amateur Footage: A Global Study of User-Generated Content in TV and Online News Output, published today by the Tow Center for Digital Journalism.

 

The aim of this research project was to provide the first comprehensive report about the use of user-generated content (UGC) among broadcast news channels. For this report, UGC means photographs and videos captured by people unrelated to the newsroom who would not describe themselves as professional journalists.

 

Some of the principal findings are:

  • UGC is used by news organizations daily and can produce stories that otherwise would not, or could not, be told. However, it is often used only when other imagery is not available. Forty percent of UGC on television was related to Syria.
  • There is a significant reliance on news agencies for discovering and verifying UGC. The news agencies have different practices and standards in how they work with UGC.
  • News organizations are poor at acknowledging when they are using UGC and worse at crediting the individuals responsible for capturing it. Our data showed that 72 percent of UGC was not labeled or described as UGC, and just 16 percent of UGC on TV had an onscreen credit.
  • News managers are often unaware of the complexities involved in the everyday work of discovering, verifying, and clearing rights for UGC. Consequently, staff in many newsrooms do not receive the training and support required to develop these skills.
  • Vicarious trauma is a real issue for journalists working with UGC every day, and it is different from traditional newsroom trauma. Some newsrooms are aware of this, but many have no structured approach or policy in place to deal with it.
  • There is a fear among rights managers in newsrooms that a legal case could seriously affect the use of UGC by news organizations in the future.

 

This research was designed to answer two key questions.  First, when and how is UGC used by broadcast news organizations, on air as well as online?  Second, does the integration of UGC into output cause any particular issues for news organizations? What are those issues and how do newsrooms handle them?

 

The work was completed in two phases. The first involved an in-depth, quantitative content analysis examining when and how eight international news broadcasters use UGC; in total, 1,164 hours of TV output and 2,254 Web pages were analyzed. The second was entirely qualitative and saw the team interview 64 news managers, editors, and journalists from 38 news organizations based in 24 countries across five continents. This report draws on both phases to provide a detailed overview of the key findings.
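
To make the quantitative phase a little more concrete, here is a minimal sketch of how labeling and crediting rates might be tallied from a coded dataset of UGC items. It is a hypothetical illustration only: the file name and the yes/no columns are assumptions, not the report’s actual coding sheet.

    # Minimal sketch: tally labeling and crediting rates from coded UGC items.
    # "ugc_coding.csv" and its yes/no columns are hypothetical, for illustration only.
    import csv

    total = labeled = credited = 0
    with open("ugc_coding.csv", newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            total += 1
            if row["labeled_as_ugc"].strip().lower() == "yes":
                labeled += 1
            if row["onscreen_credit"].strip().lower() == "yes":
                credited += 1

    if total:
        print(f"Labeled as UGC:  {100 * labeled / total:.0f}% of items")
        print(f"Onscreen credit: {100 * credited / total:.0f}% of items")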

 

The research provides the first concrete figures we have about the level of reliance on UGC by international news channels. It also explores six key issues that newsrooms face in terms of UGC. The report is organized around those six issues, so you can dip into any one of them:

1) Workflow – how is UGC discovered and verified? Do newsrooms do this themselves, and if so, which desk is responsible? Or is UGC ‘outsourced’ to news agencies?

2) Verification – are there systematic processes for verifying UGC? Is there a threshold that has to be reached before a piece of content can be used?

3) Permissions – how do newsrooms seek permissions? Do newsrooms understand the copyright implications around UGC?

4) Crediting – do newsrooms credit UGC?

5) Labeling – are newsrooms transparent about the types of UGC that they use in terms of who uploaded the UGC and whether they have a specific agenda?

6) Ethics and Responsibilities – how do newsrooms consider their responsibilities to uploaders, the audience and their own staff?

 

The full report can be viewed here.

Events

Upcoming Events: #Ferguson


#Ferguson: Reporting a Viral News Story

Thursday, October 23rd, 7 PM

Lecture Hall, Columbia Journalism School

We are hosting an event this Thursday with Antonio French of the City of St. Louis, Wesley Lowery of the Washington Post, Alice Speri of VICE News, and Zeynep Tufekci, Professor at the University of North Carolina. The panel will be moderated by Emily Bell. It is open to the public. Register here for tickets: www.reportingferguson.eventbrite.com

 

Research

New Research: What we talk about when we talk about sharing


Michael Shapiro has partnered with Michelle Levine to study how longform readers share.

Shapiro is a co-founder of The Big Roundtable, an entrepreneurial longform journalism site; the author of six books; and a professor at the Columbia Graduate School of Journalism. Levine is a research scientist in the computer science department of Columbia University. Her research focuses on linguistic and nonlinguistic aspects of communication in human-human and human-computer interaction.

That the longform revival is fully with us is an accepted, if entirely unanticipated, consequence of journalism’s digital revolution. How long ago was it, after all, when received wisdom had it that no one would ever read more than a single, desktop screen’s worth of content online?

Everyone, it seems, has jumped on the longform bandwagon. BuzzFeed publishes weekly longform stories, and the field has seen the arrival of such curators as Longform.org and Longreads.com, as well as original content publishers like Narratively, The Atavist, and The Big Roundtable. “Snowfall” has entered the journalistic lexicon as ever more publications try to replicate the multimedia sensation of the New York Times tale of a fatal avalanche.

Though big yarns are seemingly everywhere, left unanswered is the question of how readers discover them, given all the quicker, edgier, click-bait competitors vying for their attention.

It is one thing to envision how three-quarters of a billion people might come to share “Charlie Bit My Finger,” which is 55 seconds long and features two impossibly cute children. But what about “Kill Me Now,” Jaime Joyce’s 10,000-word story of the assisted suicide of an emotionally troubled but physically healthy woman, which The Big Roundtable and BuzzFeed co-published on the weekend between Christmas and New Year’s 2013 and which drew 500,000 page views?

How do you explain that?

We asked. No one knew. Or rather, what we heard was that it was one of those mysteries of the internet, where a network forms, seemingly out of nowhere and for reasons that could not have been anticipated.

Networks, as sociologist Duncan Watts wrote in “Six Degrees,” are curious phenomena in that there is no consistent way of ensuring that they form. They simply do. Just as a worldwide network formed around “Charlie Bit My Finger,” so too did a smaller, but not insignificant, one form around “Kill Me Now.” But given how different these two forms of content are, did the networks around them form in different ways?

And if they did, what does that mean for authors, publishers and curators of digital longform journalism?

So it was that earlier this year, we launched an experiment that, with apologies to the late author Raymond Carver, we dubbed “What Do We Talk About When We Talk About Sharing?”

We wanted to understand how networks might form around big stories. Specifically, we wanted to better understand how those stories get disseminated among readers. What channels do they use both to find those stories, and then, if they like them, to spread the word?

Do long narratives get shared most effectively through social media channels like Facebook or Twitter? Or does their length, and with it the commitment a reader makes to them (a half-hour, or longer perhaps), represent something altogether different in online social interaction?

Analytics, we knew, can tell you a good deal about behavior, as measured by page views, entrances, and bounce rate, as well as the elusive-to-pin-down time on page.

But we thought it might be more useful to ask the readers themselves, and so sought out Michelle Levine, research scientist in the computer science department at Columbia, to help us devise a study.

Through Medium and Narratively, we at The Big Roundtable joined with the Tow Center to spread the word that we were looking for readers of longform journalism who would be willing to keep a diary of their reading and sharing habits for three weeks. Those who completed those diaries would receive a Kindle.

The diaries asked: What did you read? When did you read? Where did you get the piece? Who referred it? In what form did you read it? Did you share it? If so, how did you share it and with whom? We also asked whether readers shared any longform stories that they didn’t personally read.

We did not specify a word length to establish a definition of a longread; instead we left it to the participants to self-identify as readers of ambitious narrative stories. We found 64 participants and set them to reading, sharing and chronicling their behavior.

We will publish the study’s results in December. But the early numbers hint at something intriguing: while a network that forms around a big story might never compete with the kind of viral network that propelled “Charlie Bit My Finger,” it may well represent a much more closely knit network, a pop-up community, if you will, of kindred spirits, of readers who seek each other out, one by one.

Events

Upcoming Lectures


Race in the Age of Digital Media: A Lecture by Ta-Nehisi Coates

Register here.

“It’s very difficult to know that it doesn’t matter what morals you instill in your children,” she said. “That there are certain people who will never see the value and know who they are.”

And yet African Americans raised in such circumstances understand that in so many ways they are not that far removed from the block. Many of them are just a generation away, and they still have cousins, brothers, and uncles struggling. Their country cannot see this complexity, and thinks of the entire mass as the undeserving poor—which is to say, in the language of our country, criminal.

—“To Raise, Love, and Lose a Black Child,” Oct 8, 2014

Ta-Nehisi Coates

Ta-Nehisi Coates is a national correspondent at The Atlantic, where he writes about culture, politics, and social issues. He is also the author of the memoir The Beautiful Struggle.

In May, The Atlantic published “The Case for Reparations” both in its print edition and as a digital longform multimedia piece. It went viral and provoked discourse across the country. Another article by Coates, “This is How We Lost to the White Man,” explores the generational and ideological rifts in the black community; its title is a quote from Bill Cosby. Last year, Coates’s lively Atlantic blog, a lesson in how to thoroughly engage a community of readers, was named by Time as one of the 25 Best in the World.

Coates is a former writer for The Village Voice and a contributor to Time, O, and The New York Times Magazine. In 2012, he was awarded the Hillman Prize for Opinion and Analysis Journalism. Judge Hendrik Hertzberg, of The New Yorker, wrote, “Coates is one of the most elegant and sharp observers of race in America. He is an upholder of universal values, a brave and compassionate writer who challenges his readers to transcend narrow self-definitions and focus on shared humanity.”

In Fall 2014, Coates began a new position teaching at the School of Journalism at CUNY. He was previously the Martin Luther King Visiting Associate Professor at MIT.

Q & A to follow lecture, moderated by Emily Bell, Director of the Tow Center.

Sponsored by the Tow Center for Digital Journalism and the Sevellon-Brown Fund with support from the Brown Institute for Media Innovation and the Columbia Journalism Association of Black Journalists.

A small workshop with Ta-Nehisi Coates will follow the lecture. This is open only to current students and faculty, and will be capped at 40. If you are a J-school student interested in attending, please send a short paragraph with the subject header “Coates Workshop” to eb2596@columbia.edu by Monday, October 13th, at noon. If you are faculty, you are welcome to attend; just let us know your name with the subject header “Faculty Workshop” at eb2596@columbia.edu.

The lecture is free and open to the public.  Register here for a ticket: https://coates.eventbrite.com

Registration recommended.

Past Events

Journalism After Snowden – Upcoming events and activities


The recent beheadings of journalists Steven Sotloff and James Foley at the hands of the Islamic State of Iraq and the Levant (ISIL) are a horrific reminder that journalists are still murdered brutally by those seeking power and control.

In the United States, journalism faces less viscerally horrific realities, yet critical and timely questions remain for the future of journalism in an age of big data and surveillance. How can journalists protect their sources in an information age where metadata can reveal sources without a subpoena and where the prosecution of unsanctioned leakers is the highest it has been in years? What should journalists do when the tools they rely on for their news reporting facilitate data collection and surveillance?

We are seeking to address these questions in our yearlong Journalism After Snowden (JAS) initiative at the Tow Center for Digital Journalism, in collaboration with the Columbia Journalism Review. Read on to learn how you can get involved and contribute your voice to this important debate.

Attend lectures co-presented by the Tow Center and the Information Society Project at Yale Law School

In partnership with the Information Society Project at Yale Law School, we are hosting a fall lecture series looking at different challenges and opportunities facing journalism.

The first lecture in the series kicked off on Monday, September 29 and featured esteemed lawyer David A. Schulz, who lectured on Source Protection: Rescuing a Privilege Under Attack. Schulz discussed the history, current state, and possible future of the reporter’s privilege, an urgent topic in the wake of US courts’ decisions rejecting journalistic privilege for New York Times reporter James Risen. Watch the archived live-stream recording here.

We will continue the lecture series with events every month this fall. These include:

Investigative Reporting in a Time of Surveillance and Big Data – Steve Coll, Dean & Henry R. Luce Professor of Journalism at Columbia University Graduate School of Journalism

Tuesday, October 21, 12-1:30pm, Yale Law School, Room 122, 127 Wall Street, New Haven

Steve Coll, author of seven investigative journalism books and two-time Pulitzer Prize winner, will discuss the new environment for journalists and their sources. Register here.

Normalizing Surveillance – Ethan Zuckerman, Director, Center for Civic Media at MIT

Tuesday, November 18, 12-1:30pm, World Room, Columbia University

The default online business model – advertising-supported services and content – has normalized mass surveillance. Does that help explain the mixed public reaction to widespread surveillance by governments? Register here.

 

Journalism After Snowden – Jill Abramson, former Executive Editor of the New York Times

Tuesday, December 2, 12-1:30pm, Yale Law School, Room 122, 127 Wall Street, New Haven

Abramson will conclude the lecture series with a Journalism After Snowden discussion at Yale University. Click here to reserve your spot.

All lectures are free and open to the public, but you must RSVP to attend. All events will be live streamed to allow for remote participation.

Educate yourself about digital security and source protection

 

Workshop: Source Protection in the Information Age

Saturday, October 11, 8:30am-5pm, Pulitzer Hall, Columbia University

On October 11, the Tow Center, OpenITP, Mashable and Columbia Law School will host a one-day workshop on the essentials of source protection for journalists in the information age. The workshop will aim to answer practical and theoretical questions facing journalists who wish to implement digital security practices in their workflow.

The morning half of the workshop will feature panels of professional journalists who will discuss how they strategically use technology to both get the story and protect their sources. In the afternoon, attendees will take part in small-group trainings on the security tools and methods that make the most sense for their particular publication and coverage area. Click here to register.

National Poll with Pew Research Center

In partnership with Pew, Columbia will conduct a survey of investigative journalists and their use of digital security tools, including which tools journalists use and do not use, how they conduct threat assessments, and what institutional support they receive.

 

In 2015, the Tow Center’s Journalism After Snowden program continues.

 

Book: Journalism After Snowden: The Future of Free Press in the Surveillance State

In fall 2015, Columbia University Press will publish a book of essays on the implications of state surveillance for the practice of journalism. The book, titled Journalism After Snowden: The Future of Free Press in the Surveillance State, will seek to be the authoritative volume on the topic and will foster intelligent discussion and debate on the major issues raised by the Snowden affair. Confirmed contributors include Jill Abramson, Julia Angwin, Susan Crawford, Glenn Greenwald, Alan Rusbridger, David Sanger, Clay Shirky, Cass Sunstein, Trevor Timm, and Ethan Zuckerman, among others. Topics explored will include digital security for journalists, new forms of journalistic institutions, the role of the telecom and tech sectors, emerging civic media, source protection, and the future of investigative journalism.

 

Conference: Journalism After Snowden: The Future of Free Press in the Surveillance State

Thursday, February 5, 2015, Newseum, Washington, D.C.

On February 5, 2015, the Tow Center will host a one-day conference at the Newseum in Washington, D.C., with a particular focus on the future of national security reporting in a surveillance state. Structured around the book of essays, this conference will bring together globally recognized panelists to debate the shifting place of journalism in democratic societies and will reveal fresh findings from Pew Research Center about digital security practices of journalists and the impact of surveillance on journalism.

 

Research

How might digital newsrooms utilise tools developed by internet researchers to cover complex issues in new ways?


Last week we gave a talk at the Tow Center presenting some of the digital tools and methods developed at the Digital Methods Initiative (DMI) at the University of Amsterdam and the Médialab at Sciences Po in Paris. These tools and methods are informed by the work of leading philosopher and sociologist of science Bruno Latour, Scientific Director of the Médialab, who was at Columbia University for a week of talks and seminars.

Our talk highlighted how these tools and methods might be applied in journalism to map complex issues and events – from the rise of extremism to climate change negotiations to global health crises.

We started off by presenting findings from several projects worked on by teams of analysts, developers and designers at the University of Amsterdam, Sciences Po and the Density Design research lab in Milan.

Just to give one example: we examined the rise of the far right in Europe, utilising digital traces from the web and social media. We wanted to find out how far right groups recruited new members and whether the counter-measures being taken were appropriate to these recruitment methods. We also looked at which issues were most active amongst these groups, and whether and how extremist far right groups were connected to more mainstream populist right wing groups.

We profiled right-wing formations in 13 European countries by analysing the linkage patterns and content of key websites. We found that new kinds of far right groups are emerging in Europe. These groups engage with new sets of issues, such as the environment, anti-globalisation and rights (such as children’s rights). They also have new recruitment techniques, recruiting through cultural festivals, lifestyle magazines and social media.

We also found that counter-measures to combat extremism were often not keeping pace with the activities of these groups. For example, in Denmark and Norway the far right are increasingly mobilised by counter-jihadism, while the counter-measures tend to instead concentrate on fighting more traditional forms of fascism.

To undertake these “issue mappings”, researchers at the Médialab and the Digital Methods Initiative commonly utilise web and social media data. This includes not only content shared through digital services but also metadata, relationships and interactions. Digital methods are “methods of the medium” (as DMI Director Richard Rogers puts it) designed to repurpose digital objects such as tags, likes, links and hashtags to study activity around issues.

These research groups have developed dozens of tools to organise digital traces from web and social media services for research.

For example, the Netvizz tool extracts data from different sections of the Facebook platform (profiles, groups, pages) for research purposes. It collects metrics such as page engagement, friendship connections and page-like networks.

Another tool, the Twitter Capture and Analysis Toolset (DMI-TCAT), captures tweets and allows for multiple analyses, including most-mentioned hashtags, most-shared links, and most active users, as well as network analyses such as co-hashtag networks, mention networks, and follower-followee networks.
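
To give a hedged sense of what one of these network analyses involves, the sketch below builds a co-hashtag network from a tweet export, assuming a hypothetical CSV with one tweet per row and a semicolon-separated “hashtags” column. It illustrates the general idea only; it is not DMI-TCAT’s own code, and the real export schema may differ.

    # Illustrative sketch: build a co-hashtag network from a tweet export.
    # "tweets.csv" and its semicolon-separated "hashtags" column are assumptions;
    # the actual DMI-TCAT export format may differ.
    import csv
    from itertools import combinations

    import networkx as nx

    graph = nx.Graph()
    with open("tweets.csv", newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            tags = sorted({t.strip().lower() for t in row["hashtags"].split(";") if t.strip()})
            for a, b in combinations(tags, 2):
                # Hashtags used in the same tweet share an edge; repeated
                # co-occurrences increase the edge weight.
                weight = graph[a][b]["weight"] + 1 if graph.has_edge(a, b) else 1
                graph.add_edge(a, b, weight=weight)

    # The heaviest edges are the hashtag pairings that co-occur most often.
    for a, b, w in sorted(graph.edges(data="weight"), key=lambda e: e[2], reverse=True)[:10]:
        print(f"{a} -- {b}: {w}")

A graph built this way could then be exported, for example with networkx’s write_gexf, for visual exploration in a tool such as Gephi.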

Besides contributing to the field of Internet studies, these tools and methods have also been used to undertake analyses for civil society organisations, foundations and international organisations. The research groups have collaborated with organisations such as Greenpeace, Hope Not Hate, Human Rights Watch, the World Health Organisation and the Open Society Foundations to undertake mappings of various issues and networks in their respective areas of activity.

In our work with the Tow Center we would like to explore to what extent these tools and methods might be useful in journalism as well. In our talk we highlighted several potential opportunities, such as using hyperlink and social network analysis to identify new human sources for stories or to establish the partisanship of a given source.
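
As one sketch of what hyperlink analysis for sourcing could look like, the snippet below fetches a few seed pages, records the outside domains each one links to, and ranks those domains by how many seed pages cite them; heavily cited domains would then be candidates for a reporter’s closer inspection. The seed URLs are placeholders, and this is only an illustration of the general idea, not a tool used by the DMI or the Médialab.

    # Sketch: rank outside domains by how many seed pages link to them.
    # The seed URLs below are placeholders for illustration.
    from collections import Counter
    from urllib.parse import urljoin, urlparse

    import requests
    from bs4 import BeautifulSoup

    SEEDS = [
        "https://example.org/story-one",
        "https://example.org/story-two",
    ]

    cited_by = Counter()
    for seed in SEEDS:
        try:
            html = requests.get(seed, timeout=10).text
        except requests.RequestException:
            continue  # skip pages that cannot be fetched
        seed_domain = urlparse(seed).netloc
        outside = set()
        for link in BeautifulSoup(html, "html.parser").find_all("a", href=True):
            domain = urlparse(urljoin(seed, link["href"])).netloc
            if domain and domain != seed_domain:
                outside.add(domain)
        cited_by.update(outside)  # count each outside domain once per seed page

    for domain, count in cited_by.most_common(10):
        print(f"{domain}: linked from {count} seed pages")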

The discussions after our talk highlighted how these tools and methods might sit within the existing workflows of digital newsrooms. We also spoke about potential privacy implications of some of these tools, and how to ensure that the operations of these tools and methods were transparent to their users.

We are very much looking forward to working with the Tow Center over the coming months to explore these possibilities. If you are interested in finding out more about our work please do get in touch.