Past Events

Journalism After Snowden Lecture Series

Presented by: Columbia University Graduate School of Journalism and the Information Society Project at Yale Law School

 

September 29, 2014

6:00pm-7:30pm, with reception to follow

Brown Institute for Media Innovation at Columbia University Graduate School of Journalism

 

Source Protection: Rescuing a Privilege Under Attack

Speaker: David A. Schulz, Partner at Levine Sullivan Koch & Schulz, LLP

Moderator: Emily Bell, Director of the Tow Center for Digital Journalism

 

Watch Full Lecture 

David Schulz | Outside Counsel to The Guardian; Lecturer, Columbia Law School; Partner, Levine Sullivan Koch & Schulz LLP | @LSKSDave
David Schulz heads the New York office of Levine Sullivan Koch & Schulz, LLP, a leading media law firm with a national practice focused exclusively on representing news and entertainment organizations in defamation, privacy, newsgathering, access, copyright, trademark, and related First Amendment matters. Schulz has been defending the rights of journalists and news organizations for nearly 30 years, litigating in the trial courts of more than 20 states and regularly representing news organizations on appeals before both state and federal tribunals. He successfully prosecuted access litigation by the Hartford Courant to compel the disclosure of sealed dockets in cases being secretly litigated in Connecticut’s state courts, as well as the challenge by 17 media organizations to the closure of jury selection in the Martha Stewart criminal prosecution. He successfully defended against invasion-of-privacy claims brought by Navy SEALs whose photos with injured Iraqi prisoners were discovered online by a reporter, and has prevailed in Freedom of Information Act litigation pursued by the Associated Press to compel the release of files relating to detainees held by the Department of Defense at Guantanamo Bay and to records of the military service of President George W. Bush.

Schulz is described as an “incredibly skilled” litigation strategist and a “walking encyclopedia” of media law by Chambers USA (Chambers & Partners, 2006), and is recognized as one of the nation’s premier First Amendment lawyers by The Best Lawyers in America (Woodward/White, 2006). He regularly represents a broad range of media clients, including The New York Times, the Associated Press, CBS Broadcasting, Tribune Company, and The Hearst Corporation, along with other national and local newspapers, television networks and station owners, cable news networks, and Internet content providers.

Schulz is the author of numerous articles and reports, including Policing Privacy, 2007 MLRC Bulletin 25 (September 2007); Judicial Regulation of the Press? Revisiting the Limited Jurisdiction of Federal Courts and the Scope of Constitutional Protection for Newsgathering, 2002 MLRC Bulletin 121 (April 2002); Newsgathering as a Protected Activity, in Freedom of Information and Freedom of Expression: Essays in Honour of Sir David Williams (J. Beatson & Y. Cripps eds., Oxford University Press 2000); and Tortious Interference: The Limits of Common Law Liability for Newsgathering, 4 Wm. & Mary Bill Rts. J. 1027 (1996) (with S. Baron and H. Lane). He received a B.A. from Knox College in Galesburg, Illinois, where he has served for more than twenty years on the Board of Trustees. He received his law degree from Yale Law School and holds a master’s degree in economics from Yale University.

 

 

Announcements, Past Events, Research

Sensors and Certification

This is a guest post from Lily Bui, a sensor journalism researcher from MIT’s Comparative Media Studies program.

On October 20, 2014, Creative Commons Science convened a workshop that brought together open hardware/software developers, lawyers, funders, researchers, entrepreneurs, and grassroots science activists to discuss the certification of open sensors.

To clarify some terminology, a sensor can either be closed or open. Whereas closed technologies are constrained by an explicitly stated intended use and design (e.g., an arsenic sensor you buy at Home Depot), open technologies are intended for modification and not restricted to a particular use or environment (e.g., a sensor you can build at home based on a schematic you find online).

Over the course of the workshop, attendees listened to sessions led by practitioners who are actively thinking about whether and how a certification process for open hardware might mitigate some of the tensions that have arisen within the field, namely around the reliability of open sensor tools and the current challenges of open licensing. As we may gather from the Tow Center’s Sensors and Journalism report, these tensions become especially relevant to newsrooms thinking of adopting open sensors for collecting data in support of journalistic inquiry. Anxieties about data provenance, sensor calibration, and best practices for reporting on sensor data also permeate this discussion. This workshop provided a space to begin articulating what is required for sensor journalism to move forward.

Below, I’ve highlighted the key points of discussion around open sensor certification, especially as they relate to the evolution of sensor journalism.

Challenges of Open Sensors

How, when, and why do we trust a sensor? For example, when we use a thermometer, do we think about how well or often it has been tested, who manufactured it, or what standards were used to calibrate it? Most of the time, the answer is no. The division of labor that brings the thermometer to you is mostly invisible, yet you inherently trust that the reading it gives is an accurate reflection of what you seek to measure. So, what is it that instantiates this automatic trust, and what needs to happen around open sensors for people to likewise have confidence in them?

At the workshop, Sonaar Luthra of Water Canary led a session about the complexities and challenges that accompany open sensors today. Most concerns revolve around accuracy, both of the sensor itself and of the data it produces. One reason for this is that the manufacture and integration of sensors are separate processes (that is to say, for example, InvenSense manufactures an accelerometer and Apple integrates it into the iPhone). Similarly, within the open source community, the development and design of sensors and their software are often separate processes from an end user’s assembly—a person looks up the open schematic online, buys the necessary parts, and builds it at home. This division of labor erodes the boundaries between hardware, software, and data, creating a need to recast how trust is established in sensor-based data.

For journalists, a chief concern around sensor data is ensuring, with some degree of confidence, that the data collected from the sensor is not erroneous and won’t add misinformation to the public sphere if published. Of course, this depends entirely on how and why the sensor is being used. If we think of accuracy as a continuum, then the degree of accuracy required can vary with the context. If the intent is to gather a lot of data and look at general trends—as was the case with the Air Quality Egg, an open sensor that measures air quality—point-by-point accuracy is less of a concern when engagement is the end goal. However, different purposes and paradigms require different metrics. In the case of StreetBump, a mobile app that uses accelerometer data to help identify potential potholes, accuracy is a much more salient issue, since direct intervention from the city means allocating resources and labor to the locations a sensor flags. Thus, creating a model to work toward shared parameters, metrics, resources, and methods might be useful to generate consensus and alleviate factors that threaten data integrity.
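
To make the trend-versus-point distinction concrete, here is a minimal sketch (simulated numbers, not actual Air Quality Egg data) of how averaging across many noisy open sensors can recover a trend that no single cheap sensor reads accurately:

```python
# Simulated example: individually noisy sensors, collectively useful trend.
import numpy as np

rng = np.random.default_rng(42)
hours = np.arange(24)
true_level = 40 + 10 * np.sin(hours / 24 * 2 * np.pi)  # hypothetical pollutant level

n_sensors = 200
# Each sensor reads the true level plus heavy per-reading noise.
readings = true_level + rng.normal(0, 15, size=(n_sensors, hours.size))

crowd_mean = readings.mean(axis=0)
print("worst error, single sensor:", np.abs(readings[0] - true_level).max())
print("worst error, crowd average:", np.abs(crowd_mean - true_level).max())
```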

There may also be alternative methods of verification that account for known biases in sensor data. Ushahidi’s Crowdmap is an open platform used internationally to crowdsource crisis information; its reports depend on verification by other users for an assessment of accuracy. One can imagine a similar system for sensor data, pre-publication or even in real time. If a sensor has a known bias in a certain direction, it is also possible to compare its data against an established standard (e.g., EPA data) and account for the bias when reporting on the data.
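
As a rough sketch of what that bias accounting might look like in practice (the numbers are invented, and a constant offset is the simplest possible bias model):

```python
# Estimate a sensor's bias from a co-located reference, then correct for it.
import numpy as np

epa = np.array([12.0, 15.0, 18.0, 22.0, 19.0, 14.0])  # reference monitor (e.g., EPA)
diy = np.array([16.1, 19.2, 21.8, 26.3, 23.0, 18.2])  # open sensor, same place and times

bias = (diy - epa).mean()  # constant-offset model; a linear fit would catch scale bias too
corrected = diy - bias
print(f"estimated bias: {bias:+.1f}")
print("corrected readings:", np.round(corrected, 1))
```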

To further investigate these questions, we can look toward extant models of verification in open science and technology communities. The Open Geospatial Consortium provides a way of thinking about interoperability among sensors, which requires that a consensus around standards or metrics be established. Alternatively, the Open Sensor Platform suggests ways of thinking about data acquisition, communication, and interpretation across various sensor platforms.

Challenges of Open Licensing for Sensors

A handful of licensing options exist for open hardware, including the CERN Open Hardware License, Open Compute Project License, and Solderpad License. Other intellectual property strategies include copyright (which can be easily circumvented and is of questionable applicability when it comes to circuits), patents (which are difficult and costly to obtain), and trademark (an option with a lower barrier to entry that would best meet the needs of open source approaches). However, whether formal licensing should be applied to open hardware at all remains an open question, as licensing would inevitably impose restrictions on a design or version of hardware that—within the realm of open source—is still susceptible to modification by the original developer or the open source community writ large. In other words, a licensing or certification process would turn what is now an ongoing project into a finished product.

Also, in contrast to open software, wherein the use of open code is clearly demarcated and tracked by the process of copying and pasting, it is less clear at what point a user actually agrees to using open hardware (e.g., upon purchase or assembly), since designs often involve a multitude of components and are sometimes accompanied by companion software.

A few different approaches to assessing open sensors emerged during the workshop:

  1. Standards. A collaborative body establishes interoperable standards among open sensors, allowing for independent but overlapping efforts. (Targeted toward the sensor.)
  2. Certification/Licensing. A central body controls a standard, facilitates testing, and manages intellectual property. (Targeted toward the sensor.)
  3. Code of conduct. A suggested set of uses and contexts for the sensor, i.e., how to use it and how not to use it. (Targeted toward people using the sensor.)
  4. Peer assessment. Self-defined communities test and provide feedback on sensors as seen in the Public Lab model. (Targeted toward the sensor but facilitated by people using it.)

In the case of journalism, methods of standardization would depend on how much (or little) granularity of data is necessary to effectively tell a story. In the long run, it may be that the means of assessing a sensor will be largely contextual, creating a need to develop a multiplicity of models for these methods.

Preliminary Conclusions

While there is certainly interest from newsrooms and individual journalists in engaging with sensor tools as a valid means for collecting data about their environments, it is not yet apparent what newsrooms and journalists expect from open sensors, or in which contexts open sensor data is most appropriate. The products of this workshop are relevant to evaluating what standards—if any—might need to be established before sensors can be more widely adopted by newsrooms.

In the future, we must keep some important questions in mind: What matters most to newsrooms and journalists when it comes to trusting, selecting, and using a sensor tool for reporting? Which sensor assessment models would be most useful, and in which context(s)?

With regard to the certification of open sensors, it would behoove all stakeholders—sensor journalists included—to determine a way to move the discourse forward.

References

  1. Fergus Pitt, Sensors and Journalism (Tow Center for Digital Journalism, May 2014).
  2. A. Pantelopoulos and N. Bourbakis, “A Survey on Wearable Sensor-Based Systems for Health Monitoring and Prognosis,” IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, Vol. 40, Iss. 1 (Jan. 2010).
  3. Open Source Hardware Association (OSHWA), Definition page.

announcements-home, Events, Past Events

Recap: Source Protection in the Information Age

“Assert the right to report.” That was the mandate Columbia’s Sheila Coronel gave our group of journalists and online privacy and security advocates this past Saturday morning, kicking off a day full of panels and workshop activities on the theme of “Source Protection in the Information Age.” In this post-Snowden age, we were reminded, as scrutiny from the government and other authority structures intensifies, simple source protection becomes something more. As Aaron Williamson put it succinctly in the morning’s first panel: “Using encryption is activism. It’s standing up for your right to keep communications private.”

How, then, to be an effective activist? The day’s emphasis was intensely practical: know your tools. We each had the opportunity to cycle through 6 of 14 available workshops. The spread effectively covered the typical activities journalists engage in: research, communication, and writing. That translated into a focus on encrypted messaging via chat and email, anonymous browsing via Tor, and desktop tools like the portable Tails operating system, which enables journalists to securely develop and store their research and writing. Snowden himself used Tails to evade the NSA’s scrutiny. We also received timely reminders about creating secure passwords and remembering that third parties are aware of our every move online.
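
On the passwords point, the now-standard advice is a long, randomly generated passphrase. Here is a minimal sketch of diceware-style generation using Python's secrets module, with an embedded wordlist that is purely illustrative (a real setup would draw from a vetted list such as the EFF's):

```python
# Diceware-style passphrase generation (illustrative wordlist only).
import secrets

WORDS = ["correct", "horse", "battery", "staple", "harbor", "violet",
         "granite", "lantern", "meadow", "quartz", "ripple", "saddle"]

def passphrase(n_words: int = 6) -> str:
    # secrets.choice draws from the OS's CSPRNG, unlike random.choice
    return " ".join(secrets.choice(WORDS) for _ in range(n_words))

print(passphrase())
```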

Throughout, we were reminded of an important fact: You’re only as strong as your weakest participant. So journalists need not only to embrace these tools, they also need to educate their sources in how to use them effectively. They also need to learn how to negotiate the appropriate means and levels of security for communication with sources.

That’s where the user experience of these tools becomes so important. The most successful tools are bound to be those that are quick to install and intuitive to use. While some of those tools are as easy to download and install as a browser or plugin (Tor, Ghostery), others involve complex steps and technical knowledge that might intimidate some users. That fact underlines the need to apply user-centered design principles to these excellent tools if they’re to be universally adopted. We have to democratize access to them.

Another tension point was the concern that using secure tools actually draws attention to the individual. A valid fear, perhaps, but the answer isn’t to abandon the tools; it’s to employ them more often, even when security isn’t a concern, and so bury the signal in noise. On that note, the day was a success. Many of us who were only somewhat aware of these issues left not just enriched with more knowledge, but with laptops sporting a few more tools to empower us as activists.

Robert Stribley is the Associate Experience Director at Razorfish. You can follow him on Twitter at @stribs.

For resources and information about this and future events, visit our Source Protection: Resources page, and follow organizers/hosts Sandy Ordonez, Susan McGregor, Lorenzo Franceschi-Bicchierai and the Tow Center on Twitter.

announcements-home, Events, Past Events, Tips & Tutorials

Source Protection: Resources

We are happy to report that many of the attendees of our October 11 workshop on Source Protection in the Information Age left with a good foundation in digital security, and trainers gained a better understanding of the challenges journalists face in becoming more secure. 
This was a collaboratively organized event that brought together organizations and individuals passionate about the safety and security of journalists. We remain committed to supporting this collaboration and will be planning future workshops.
If you weren’t able to attend the event, we recommend starting with this brief recap. In addition, we would like to share some resources that you may find useful for continuing to develop your skills and understanding in this area.
Enjoy!
The organizers
(Lorenzo, Susan, Sandy & George)

Workshop Panel Videos

Panel 1: How technology and the law put your information at risk

Runa Sandvik, James Vasile, Aaron Williamson | Moderated by Jenn Henrichsen

Panel 2: Source protection in the real world – how journalists make it work

Online Resources

Workshop Resources

Online Library

Tactical Tech Collective

Tactical Tech’s Privacy & Expression program builds the digital security awareness and skills of independent journalists and anyone else concerned about the security risks and vulnerabilities of digital tools. On their website you can find manuals, short films, interactive exercises, and well-designed how-tos.

Upcoming Privacy & Security Events

October 20 | 6:30pm | Tracked Online: How it’s done and how you can protect yourself
Techno-Activism 3rd Mondays (TA3M) is a community-run monthly meetup that happens in 21 cities throughout the world. It is a good place to meet and learn from individuals who work on anti-surveillance and anti-censorship issues. The October edition of NYC TA3M will feature a former product lead of Ghostery, who will explain how third parties track you online, what information they collect, and what you can do to protect yourself. If you would like to be alerted of upcoming TA3M events, contact Sandra Ordonez at sandraordonez@openitp.org.
RSVP: 

Circumvention Tech Festival

The Circumvention Tech Festival will take place March 1-6 in Valencia, Spain. The festival gathers the community fighting censorship and surveillance for a week of conferences, workshops, hackathons, and social gatherings, featuring many of the Internet Freedom community’s flagship events. This includes a full day of journalist security events, conducted in both English and Spanish. This is a great opportunity to meet the digital security pioneers.
RSVP: 

 

Past Events

Journalism After Snowden – Upcoming events and activities

The recent beheadings of journalists Steven Sotloff and James Foley at the hands of the Islamic State of Iraq and the Levant (ISIL) are a horrific reminder that journalists are still brutally murdered by those seeking power and control.

In the United States, journalism faces less viscerally horrific realities, yet critical and timely questions remain for the future of journalism in an age of big data and surveillance. How can journalists protect their sources in an information age where metadata can reveal sources without a subpoena and where prosecutions of unsanctioned leakers are at their highest level in years? What should journalists do when the tools they rely on for their news reporting facilitate data collection and surveillance?

We are seeking to address these questions in our yearlong Journalism After Snowden (JAS) initiative at the Tow Center for Digital Journalism, in collaboration with the Columbia Journalism Review. Read on to learn how you can get involved and contribute your voice to this important debate.

Attend lectures co-presented by the Tow Center and the Information Society Project at Yale Law School

In partnership with the Information Society Project at Yale Law School, we are hosting a fall lecture series looking at different challenges and opportunities facing journalism.

The first lecture in the series kicked off on Monday, September 29 and featured esteemed lawyer David A. Schulz, who lectured on Source Protection: Rescuing a Privilege Under Attack. Schulz discussed the history, current state, and possible future of the reporter’s privilege – an urgent topic in the wake of US court decisions rejecting journalistic privilege for New York Times reporter James Risen. Watch the archived live-stream recording here.

We will continue the lecture series with events every month this fall. These include:

Investigative Reporting in a Time of Surveillance and Big Data – Steve Coll, Dean & Henry R. Luce Professor of Journalism at Columbia University Graduate School of Journalism

Tuesday, October 21, 12-1:30pm, Yale Law School, Room 122, 127 Wall Street, New Haven

Steve Coll, author of seven investigative journalism books and two-time Pulitzer Prize winner, will discuss the new environment for journalists and their sources. Register here.

Normalizing Surveillance – Ethan Zuckerman, Director, Center for Civic Media at MIT

Tuesday, November 18, 12-1:30pm, World Room, Columbia University

The default online business model – advertising-supported services and content – has normalized mass surveillance. Does that help explain the mixed public reaction to widespread surveillance by governments? Register here.

 

Journalism After Snowden – Jill Abramson, former Executive Editor of the New York Times

Tuesday, December 2, 12-1:30pm, Yale Law School, Room 122, 127 Wall Street, New Haven

Abramson will conclude the lecture series with a Journalism After Snowden discussion at Yale University. Click here to reserve your spot.

All lectures are free and open to the public, but you must RSVP to attend. All events will be live streamed to allow for remote participation.

Educate yourself about digital security and source protection

 

Workshop: Source Protection in the Information Age

Saturday, October 11, 8:30am-5pm, Pulitzer Hall, Columbia University

On October 11, the Tow Center, OpenITP, Mashable and Columbia Law School will host a one-day workshop on the essentials of source protection for journalists in the information age. The workshop will aim to answer practical and theoretical questions facing journalists who wish to implement digital security practices in their workflow.

The morning half of the workshop will feature panels of professional journalists discussing how they strategically use technology to both get the story and protect their sources. In the afternoon, attendees will join small-group trainings on the security tools and methods that make the most sense for their particular publication and coverage area. Click here to register.

National Poll with Pew Research Center

In partnership with Pew, Columbia will conduct a survey of investigative journalists and their use of digital security tools, including what tools journalists do and do not use, how they conduct threat assessments, and what institutional support they receive.

 

In 2015, the Tow Center’s Journalism After Snowden program continues.

 

Book: Journalism After Snowden: The Future of Free Press in the Surveillance State

In fall 2015, Columbia University Press will publish a book of essays on the implications of state surveillance for the practice of journalism. The book, titled Journalism After Snowden: The Future of Free Press in the Surveillance State, will seek to be the authoritative volume on the topic and will foster intelligent discussion and debate on the major issues raised by the Snowden affair. Confirmed contributors include Jill Abramson, Julia Angwin, Susan Crawford, Glenn Greenwald, Alan Rusbridger, David Sanger, Clay Shirky, Cass Sunstein, Trevor Timm, and Ethan Zuckerman, among others. Topics explored will include digital security for journalists, new forms of journalistic institutions, the role of the telecom and tech sectors, emerging civic media, and source protection and the future of investigative journalism.

 

Conference: Journalism After Snowden: The Future of Free Press in the Surveillance State

Thursday, February 5, 2015, Newseum, Washington, D.C.

On February 5, 2015, the Tow Center will host a one-day conference at the Newseum in Washington, D.C., with a particular focus on the future of national security reporting in a surveillance state. Structured around the book of essays, this conference will bring together globally recognized panelists to debate the shifting place of journalism in democratic societies and will reveal fresh findings from Pew Research Center about digital security practices of journalists and the impact of surveillance on journalism.

 

Past Events

Behind “Losing Ground” II: Q&A with Scott Klein and Al Shaw of ProPublica

We spoke with Scott Klein, assistant managing editor, and Al Shaw, news applications developer, at ProPublica about the editorial decisions and satellite imagery used in “Losing Ground.” To read more about the science behind the imagery, read a case study by Fergus Pitt here.

Smitha: Where did the idea for “Losing Ground” come from?

Scott: Mapping is an important part of what we do. We hadn’t done all that much with satellite imagery—maybe an image or two.

I attended the Tow Center’s Remote Sensing Seminar, a two-day conference, and I started thinking: people have used this in news, but no one’s used this to do an investigative story. I started talking to Fergus Pitt about what makes sense for an investigative story, and among the things we thought would be interesting was using satellites not just to illustrate a story but also as a key way of analyzing information. So Fergus and the Tow Center very generously offered to help us.

Another thing that happened is that we had been working in the Mississippi river delta for a long time, and we knew that there was a big story happening in Southeast Louisiana that was not known outside the delta, which was this soil erosion, subsidence, and land loss issue.

Al: Bob Marshall at the Lens had been writing about this extensively. I had also read a book called Bayou Farewell, which is a fantastic book about the people who live outside the levees and watching the land disappear, so there were a lot of different inspirations.

Scott: One of the things we wanted to look at with satellites was the Mississippi delta. In our reporting, we found that NOAA, the National Oceanic and Atmospheric Administration, maintained a list of places to remove from maps because of climate change and soil erosion. These are settlements, marshes, bays and rivers and things like that that are now just open water, so that NOAA knows they don’t have to provide weather alerts there, because you don’t have to provide weather alerts for places that don’t exist.

So this is fascinating for us, and we had seen that nobody had covered this before. This really inspired us to start asking the question, what’s being lost?

If you drew a circle around New Orleans, everyone inside the circle knows that this is happening, everyone outside the circle doesn’t.

We said this is a story we want them all to hear, we really think there’s a compelling reason to use satellites to analyze the information and then to use the satellite images to tell the story and let them see what’s being lost, both from a 30,000 foot view, almost literally, all the way down to personal stories.

And that’s when we started thinking, who has done really compelling work in southeast Louisiana? Who has the contacts and the means and the understanding to be able to go and tell personal stories of the people whose settlements, whose culture, whose livelihoods are being destroyed?

Al: Bob Marshall has grown up and spent his entire life down there.

He actually has a boat and knows these bayous by heart. When we first started talking about this, I went down there and asked him, “Say we had satellite imagery— what are the areas that you would like to see most illuminated?” and we kind of went through a map and he said, “This is one of the most important places.” We actually started drawing boxes around places, and that’s how the reporting built up from there.

Smitha: So you had a lot of local knowledge from Bob to rely on.

Scott: That was really a key thing that we didn’t have here in New York. Again, thanks to the Tow Center, Al and some other folks got some pretty intensive training from a studio that does satellite work, as well as from Lela Prasad, a woman who used to work for NASA.

It was sort of like when people in The Matrix download the skill to fly a helicopter: in just over three days, there was an incredibly intensive knowledge exchange, and Lela Prasad taught us exactly how to understand and work with satellite images.

And that was when things really got started.

Smitha: How easy was it to pick up this technology? Was it challenging, or was there a steep learning curve?

Al: It is somewhat of a steep learning curve. When you download these images right from NASA, they don’t really look like anything. And the tools are still somewhat rudimentary. You kind of have to cobble stuff together. We had to actually write a bunch of new software.

Scott: Now there are also a number of satellites with different capabilities and different caveats that you have to understand.

Al: The sediment kind of looked indistinguishable from the land, so that didn’t tell our story that well. We had to color correct the images because satellites don’t make water look like what we think of as water, like bright blue, or bright green for land. The images are also shot on different days, shot from different angles, there’s cloud cover. To turn that into the big image you see on the site took a fair amount of processing.
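
For readers curious what that correction involves, here is a minimal sketch of a percentile contrast stretch, a common first step in making a satellite composite read as land and water. Synthetic arrays stand in for real bands, which would be loaded with a library such as rasterio:

```python
# Percentile stretch + gamma correction for an RGB satellite composite.
import numpy as np

rng = np.random.default_rng(1)
# Stand-ins for red/green/blue band reflectances.
red, green, blue = (rng.uniform(0.02, 0.35, (256, 256)) for _ in range(3))

def stretch(band, lo_pct=2, hi_pct=98, gamma=1.1):
    # Clip to the 2nd-98th percentile, rescale to [0, 1], then gamma-correct.
    lo, hi = np.percentile(band, [lo_pct, hi_pct])
    scaled = np.clip((band - lo) / (hi - lo), 0.0, 1.0)
    return scaled ** (1.0 / gamma)

rgb = np.dstack([stretch(red), stretch(green), stretch(blue)])  # display-ready array
print(rgb.shape, float(rgb.min()), float(rgb.max()))
```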

The 1922 map, that’s a United States Geological Survey (USGS) map we got from Louisiana State University, and we geo-referenced it, which basically means adding geographic data to the scanned image in order to line it up with the 2014 Landsat satellite image.
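
In code, that geo-referencing step can be sketched with GDAL's Python bindings: pin recognizable features in the scan to real-world coordinates (ground control points), then warp the image into the Landsat scene's projection. The file names and coordinates below are placeholders, not the project's actual values:

```python
# Geo-reference a scanned historical map with ground control points (GCPs).
from osgeo import gdal

gcps = [
    # gdal.GCP(lon, lat, elevation, pixel_x, pixel_y) -- placeholder values
    gdal.GCP(-90.07, 29.95, 0, 120, 340),
    gdal.GCP(-89.40, 29.35, 0, 2210, 2480),
    gdal.GCP(-90.75, 29.10, 0, 260, 2590),
]

# Attach the control points, then warp into the target projection.
gdal.Translate("usgs_1922_gcp.tif", "usgs_1922_scan.tif",
               GCPs=gcps, outputSRS="EPSG:4326")
gdal.Warp("usgs_1922_aligned.tif", "usgs_1922_gcp.tif",
          dstSRS="EPSG:4326", resampleAlg="bilinear")
```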

Smitha: So there was a lot of stitching together geospatial information from very different sources as well.

Al: In the intro slideshow, there are overlays from the levees, the canals, oil and gas infrastructure, and pipelines, which came from different sources in USGS, from government sources in Louisiana, from a dissertation that an LSU student had done. So a lot of different sources went into it.

Smitha: One thing that really struck me about the way the piece is put together is that it’s very simple. Even though there is a lot of complex information, it feels very easy to navigate. How did you approach the issue of usability, the user’s experience of the site?

Scott: From the beginning we knew we wanted to make something where the maps were the main kind of metaphor. The maps were going to be the biggest thing on the page, the central spine of the interaction.

A few weeks ago we did some semi-formal user testing. We put a tweet out and sent an e-mail asking for volunteers to come and take a look at this. We watched them navigate through a draft of the app on a big screen and we asked them questions. It taught us a lot, and we cut out a lot of stuff. There was a whole different navigation metaphor that we left out.

Smitha: What are your hopes for the policy implications of this piece?

Scott: Our job as journalists is to inform the debate and give people as much information as they need to make really good decisions. Our hope for the policy piece is that we inform the debate in Louisiana. More importantly, I think focusing national attention on this will bring it needed scrutiny.

Smitha: What challenges are unique to working with satellite imagery?

Al: The raw size of the data is a big one. We went through tens of gigabytes of satellite imagery and other sources, and being able to chew through that is a big barrier in itself.

Smitha: The collaboration with The Lens, based out of New Orleans—is this something you do a lot, working closely with local papers?

Scott: We do it very extensively. It’s a long tradition for us to work with local newsrooms.

Smitha: Has the cost of working on an interactive piece like this been prohibitive at all, or has it been a worthy investment?

Scott: It has absolutely been a worthy investment. The only costs have been staff time, travel—the imagery is all from the government.

Smitha: Do you have any projects similar to “Losing Ground” currently in the works or in gestation?

Scott: We do! We can’t talk about it. And this isn’t even the end of the Louisiana project, so we will have more to come.

 

 

Past Events

Behind ProPublica’s “Losing Ground”

Today, almost nine years after Hurricane Katrina made landfall in Louisiana, ProPublica is launching “Losing Ground,” a mixed media piece that shows the erosion of the Louisiana coastline using maps, photographs, text, a timeline, and audio interviews of residents.

The piece relies heavily on satellite imagery and is the product of a unique collaboration between ProPublica, a New York-based newsroom, and The Lens, a public-interest newsroom based in New Orleans. The project also represents a first for the Tow Center, which helped train journalists at ProPublica in remote sensing techniques as part of its extensive Sensor Newsroom research program, a Tow-Knight research project. It is the first time the Tow Center has collaborated directly with a newsroom as part of its field research.

Tow’s Sensor Newsroom research, led by Fergus Pitt, has had several strands. This past February, the Tow Center taught the ProPublica team the fundamental physics of remote sensing and key concepts of temporal and spatial resolution, and ran practical exercises analyzing ground moisture levels. This was followed by the release of the report “Sensors and Journalism,” which covers everything from the legal dimensions of using sensors to case studies of newsrooms across the country. This summer, Columbia was the first J-School in the country to offer a course, the Sensor Newsroom, to teach students how to use sensor technology to enrich their reporting and storytelling abilities.
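
To give a flavor of the moisture exercise: analyses like this often reduce to a normalized difference of two bands, such as the NDMI, (NIR - SWIR) / (NIR + SWIR), where wetter surfaces score higher. A minimal sketch, with synthetic reflectance values standing in for real Landsat bands:

```python
# Normalized Difference Moisture Index (NDMI) from NIR and SWIR bands.
import numpy as np

rng = np.random.default_rng(0)
nir = rng.uniform(0.10, 0.50, (4, 4))   # near-infrared reflectance (stand-in)
swir = rng.uniform(0.05, 0.40, (4, 4))  # shortwave-infrared reflectance (stand-in)

ndmi = (nir - swir) / (nir + swir)      # values near +1 suggest wetter surfaces
print(np.round(ndmi, 2))
```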

Accompanying “Losing Ground” is a case study, “ProPublica, Satellites, and the Shrinking Louisiana Coast,” by Pitt, detailing the technical process behind the production of the piece.

 


 

Since Hurricane Katrina, the city of New Orleans and the bayous of Louisiana have become part of the American cultural imagination. The HBO series Treme depicts the lives of characters in New Orleans in the immediate aftermath of the storm. The film Beasts of the Southern Wild is about a young girl in a fictional bayou community (affectionately called “the Bathtub”) that is almost wiped out by a storm and subsequent salt-water erosion. The novel Salvage the Bones by Jesmyn Ward chronicles one family’s struggle during the ten days preceding Hurricane Katrina.

There is one constant in the lives of these characters: the persistent fear of flooding, of erasure.

In the closing chapter of Salvage the Bones, the protagonist reflects, “She left us a dark Gulf and salt-burned land. She left us to learn to crawl. She left us to salvage. Katrina is the mother we will remember until the next mother with large, merciless hands, committed to blood, comes.”

Yet nothing captures the reality of this fear as well as “Losing Ground.” As you hover over maps showing the land loss between 1930 and 2010, the effect is chilling. This is no longer a potentiality but an inevitability. In audio clips, residents describe hometowns that have already been swallowed up by the Gulf of Mexico.

“ProPublica’s remote sensing work is a great example of how reporters have all these powerful new tools at their disposal. The Sensors + Journalism report we just released shows how they’re used by other top newsrooms and gives readers an overview of how to think about them,” says Pitt.

Emily Bell, director of the Tow Center, says, “Through the Tow-Knight research projects we have been building a type of research methodology into digital journalism which investigates emerging practices and technologies and encourages collaboration with newsrooms. Ultimately we want to be a place where academics, practitioners and students can learn together through actual application of new techniques and then share the findings with the broader journalism community.”

An upcoming research project by Tow will introduce virtual reality technology to newsrooms in New York in order to create vivid experiences of current affairs.

Read more about the editorial decisions and satellite imagery in “Losing Ground” in a Q&A with Scott Klein and Al Shaw of ProPublica. 


Interested in participating in or proposing research relating to emerging technology, newsrooms, or new media? Send an e-mail to TowCenter@Columbia.edu.

Past Events

Why We Like Pinterest for Fieldwork: Research by Nikki Usher and Phil Howard

Nikki Usher, GWU

Phil Howard, UW and CEU

7/16/2014

Anyone tackling fieldwork these days can choose from a wide selection of digital tools to put in their methodological toolkit. Among the best of these tools are platforms that let you archive, analyze, and disseminate at the same time. It used to be that these were fairly distinct stages of research, especially for the most positivist among us. You came up with research questions, chose a field site, entered the field site, left the field site, analyzed your findings, got them published, and shared your research output with friends and colleagues.

 

But the post-positivist approach that many of us like involves adapting your research questions—reflexively and responsively—while doing fieldwork. Entering and leaving your field site is not a cool, clean and complete process. We analyze findings as we go, and involve our research subjects in the analysis. We publish, but often in journals or books that can’t reproduce the myriad digital artifacts that are meaningful in network ethnography. Actor network theory, activity theory, science and technology studies and several other modes of social and humanistic inquiry approach research as something that involves both people and devices. Moreover, the dissemination of work doesn’t have to be something that happens after publication or even at the end of a research plan.

 

Nikki’s work involves qualitative ethnographic work at field sites where research can last from five months to a brief week-long visit to a quick drop-in day. She learned the hard way from her research for Making News at The New York Times that failing to find a good way to organize and capture images was a missed opportunity once data collection ended. Since then, Nikki’s been using Pinterest for fieldwork image gathering quite a bit. Phil’s work on The Managed Citizen was set back when he lost two weeks of field notes on the chaotic floor of the Republican National Convention in 2000 (security incinerates all the detritus left by convention goers). He’s been digitizing field observations ever since.

 

Some people put together personal websites about their research journey. Some share over Twitter. And there are plenty of beta tools, open source or otherwise, that people play with. We’ve both enjoyed using Pinterest for our research projects. Here are some points on how we use it and why we like it.

 

How To Use It

  1. When you start, think of this as your research tool and your resource. If you dedicate yourself to this as your primary archiving system for digital artifacts, you are more likely to build it up over time. If you think of this as a social media publicity gimmick for your research, you’ll eventually lose interest and it is less likely to be useful for anyone else.
  2. Integrate it with your mobile phone because this amps up your capacity for portable, taggable, image data collection.
  3. Link the board posts to Twitter or your other social media feeds. Pinterest itself isn’t that lively a place for researchers yet. The people who want to visit your Pinterest page are probably actively following your activities on other platforms so be sure to let content flow across platforms.
  4. Pin lots of things, and lots of different kinds of things. Include decent captions, though be aware that if you are feeding Twitter you need to fit character limits.
  5. Use it to collect images you have found online, images you’ve taken yourself during your fieldwork, and invite the communities you are working with to contribute.
  6. Back up and export things once in a while for safekeeping. There is no built-in export function, but there are a wide variety of hacks and workarounds for transporting your archive (see the sketch after this list).
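
One such workaround, sketched below under a loudly stated assumption: Pinterest boards have, at the time of writing, exposed an RSS feed at the board URL plus “.rss”, which a short script can use to save pin titles and links. Both the feed pattern and the example board URL should be verified rather than trusted:

```python
# Back up a Pinterest board's pin titles and links via its RSS feed.
# ASSUMPTION: boards expose RSS at <board URL>.rss; verify before relying on it.
import xml.etree.ElementTree as ET
import requests

FEED = "https://www.pinterest.com/example_user/example_board.rss"  # hypothetical URL

resp = requests.get(FEED, timeout=30)
resp.raise_for_status()
root = ET.fromstring(resp.content)

with open("board_backup.txt", "w", encoding="utf-8") as out:
    for item in root.iter("item"):
        title = item.findtext("title", default="")
        link = item.findtext("link", default="")
        out.write(f"{title}\t{link}\n")
```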

 

What You Get

  1. Pinterest makes it easy to track the progress of the image data you gather. You may find yourself taking more photos in the field because they can be easily arranged, saved and categorized.
  2. Using it regularly adds another level of data: photos and documents captured on your phone and then added to Pinterest can be quickly captioned in the field and re-catalogued, giving you a chance to review the visual and built environment of your field site and interrogate your observations afresh.
  3. Visually-enhanced constant comparative methods: post-data collection, you can go beyond notes to images and captions that are easily scanned for patterns and points of divergence. This may be going far beyond what Glaser and Strauss had imagined, of course.
  4. Perhaps most important, when you forget what something looks like when you’re writing up your results, you’ve got an instant, easily searchable database of images and clues to refresh your memory.

Why We Like It

  1. It’s great for spontaneous presentations. Images are such an important part of presenting any research. Having a quick, publicly accessible archive of content allows you to speak, on the fly, about what you are up to. You can’t give a tour of your Pinterest page for a job talk. But having the resource there means you can call on images quickly during a Q&A period, or quickly load something relevant on a phone or browser during a casual conversation about your work.
  2. It gives you a way to interact with subjects. Having the Pinterest link allows you to show a potential research subject what you are up to and what you are interested in. During interviews it allows you to engage people on their interpretation of things. Having visual prompts handy can enrich and enliven any focus group or single subject interview. These don’t only prompt further conversation, they can prompt subjects to give you even more links, images, videos and other digital artifacts.
  3. It makes your research interests transparent. Having the images, videos and artifacts for anyone to see is a way for us to show what we are doing. Anyone with interest in the project and the board link is privy to our research goals. Our Pinterest page may be far less complicated than many of our other efforts to explain our work to a general audience.
  4. You can disseminate as you go. If you get the content flow right, you can tell people about your research as you are doing it. Letting people know about what you are working on is always a good career strategy. Giving people images rather than article abstracts and draft chapters gives them something to visualize and improves the ambient contact with your research community.
  5. It makes digital artifacts more permanent. As long as you keep your Pinterest, what you have gathered can become a stable resource for anyone interested in your subjects. As sites and material artifacts change, what you have gathered offers a permanent and easily accessible snapshot of a particular moment of inquiry for posterity.

 

Pinterest Wish-list

One of us is a Windows Phone user (yes, really), and it would be great if there were a real Pinterest app for the Windows Phone. One-touch integration from the iPhone camera roll, much like Twitter, Facebook, and Flickr offer, would be great (though there is an easy hack).

 

We wish it would be easier to have open, collaborative boards. Right now, the only person who can add to a board is you, at least at first. You can invite other people to join a “group board” via email, but Pinterest does not have open boards that allow anyone with a board link to add content.

 

Here’s a look at our Pinboards: Phil Howard’s Tech + Politics board, and Nikki Usher’s boards on U.S. Newspapers. We welcome your thoughts…and send us images!

 

 

 

 

Nikki Usher is an assistant professor at the George Washington University’s School of Media and Public Affairs. Her project is Post Industrial News Spaces and Places with Columbia’s Tow Center for Digital Journalism. Phil Howard is a professor at the Central European University and the University of Washington. His project is a book on Political Power and the Internet of Things for Yale University Press.

 

Announcements, Events, Past Events, Research

Digital Security and Source Protection For Journalists: Research by Susan McGregor

EXECUTIVE SUMMARY

The law and technologies that govern the functioning of today’s digital communication systems have dramatically affected journalists’ ability to protect their sources.  This paper offers an overview of how these legal and technical systems developed, and how their intersection exposes all digital communications – not just those of journalists and their sources – to scrutiny. Strategies for reducing this exposure are explored, along with recommendations for individuals and organizations about how to address this pervasive issue.

 

DOWNLOAD THE PDF

Order a (bound) printed copy.

Comments, questions & contributions are welcome on the version-controlled text, available as a GitBook here:

http://susanemcg.gitbooks.io/digital-security-for-journalists/

DIGITAL SECURITY AND SOURCE PROTECTION FOR JOURNALISTS

Preamble

Digital Security for Journalists: A 21st Century Imperative

The Law: Security and Privacy in Context

The Technology: Understanding the Infrastructure of Digital Communications

The Strategies

Looking Ahead

Footnotes

 

Announcements, Events, Past Events, Research, The Tow Center

Tow Center Program Defends Journalism From the Threat of Mass Surveillance

Knight Foundation supports Journalism After Snowden to ensure access to information and promote journalistic excellence. Below, Jennifer Henrichsen, a research fellow at the Tow Center for Digital Journalism at Columbia Journalism School, and Taylor Owen, research director, write about the expansion of the program.

We’ve long known that it’s easy to kill the messenger. Journalists are murdered all around the world for speaking truth to power. But it wasn’t until recently that we realized how mass surveillance is killing source confidentiality, and with it, the very essence of journalism. By taking away the ability to protect sources—the lifeblood of journalism—surveillance can silence journalists without prosecutions or violence. Understanding the implications of state surveillance for the practice of journalism is the focus of our project, Journalism After Snowden.

We’re in an age of mass surveillance, and it’s expanding. Metadata can reveal journalists’ sources without requiring officials to obtain a subpoena. Intelligence agencies can tap undersea cables to capture encrypted traffic. Mobile devices, even when powered off, can be remotely accessed to record conversations. The extent of manipulation and penetration of the technology that journalists rely on to communicate with their sources makes it difficult—if not impossible—for journalists to truly protect them. And without reasonable assurances of protection, sources will invariably dry up, cutting off a supply of information about government wrongdoing that for more than a century has been a critical check on power in democratic governance. And journalism without sources is not journalism at all; it’s public relations for the powerful.
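
To see how little the content of a message matters here, consider a minimal sketch using Python's standard email module. Even when the body is PGP-encrypted, the envelope (who wrote to whom, and when) is readable by any intermediary that handles the message; the message below is made up:

```python
# Even an encrypted email leaves its metadata in the clear.
from email import message_from_string

raw = """\
From: source@agency.example
To: reporter@paper.example
Date: Mon, 29 Sep 2014 09:12:00 -0400
Subject: (encrypted)

-----BEGIN PGP MESSAGE-----
...ciphertext...
-----END PGP MESSAGE-----
"""

msg = message_from_string(raw)
for header in ("From", "To", "Date", "Subject"):
    print(f"{header}: {msg[header]}")  # all visible without any decryption
```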

So what can we do? With generous funding from The Tow Foundation and Knight Foundation, the Tow Center for Digital Journalism at Columbia Journalism School seeks to address what we think are three core challenges facing journalism in the age of state surveillance.

First, more journalists and news organizations need to take source protection seriously. They need to conduct risk assessments and embrace digital security tools and techniques. They need to arm themselves with knowledge of their legal rights—or lack thereof—and conduct a thorough audit of how the technology platforms they use retain and release data. And more news organizations should consider implementing technologies like SecureDrop, an open-source whistleblower submission system, which enables media organizations to more securely accept documents from anonymous sources.

Second, we need to strengthen collaboration between journalists and technologists. Bridging this professional divide is critical to ensuring journalists can reach out to trusted technologists for expertise and technologists can better understand the challenges that journalists face and create more user-friendly tools that address their needs. Journalists also need to be more skeptical when problems with their devices arise. Rather than immediately running to the Apple store to wipe their devices (which can actually hide the problem), journalists should enlist technologists to help determine if there is a more sinister cause than simple equipment malfunction. Researchers and technologists also need to join together to develop a system to collect and anonymize data showing digital attacks against journalists so researchers can analyze these attacks, ascertain potential trends and identify possible solutions.

Third, journalist educators and journalism schools need to discuss how to integrate digital security curricula into their classrooms. Currently, most journalism professors provide ad hoc digital security education—if they do at all. Digital security education needs to become more mainstream in journalism classrooms to ensure emerging journalists are cognizant of the real risks they and their sources face in this changing environment, and to foster the confidence they need to better protect both.

The Journalism After Snowden Project seeks to contribute high-quality conversations and research to strengthen the national debate around state surveillance and freedom of expression. The initiative will feature a yearlong series of events, research projects and articles that we will publish in coordination with Columbia Journalism Review, and it will forge new partnerships with the individuals and organizations that are already doing great work in this space. These will include: a workshop bringing together technologists and journalists in San Francisco; a public lecture by Glenn Greenwald; a lecture series in partnership with the Yale Information Society Project; an edited volume likely to be published by Columbia University Press; a poll on the digital security practices of investigative journalists to be published with Pew Research Center; several research reports on digital security teaching and training for journalists; and a conference on national security reporting in Washington, D.C.

By tackling these challenges together, we’ll help to prevent the death of journalism at the hands of mass surveillance and ensure journalism after Snowden is stronger, not weaker.