Taylor Owen


New Tow Project: Virtual Reality for Journalism


Long a figment of technophile imagination, a confluence of technological advances has finally placed in-home virtual reality on the cusp of mainstream adoption. Media attention and developer interest have surged, powered by the release of the Oculus Rift to developers, the anticipated launch of Samsung’s Gear VR, rumored headsets from Sony and Apple, and a cheeky intervention from Google called Cardboard: a simple VR viewer made of cardboard, Velcro, magnets, a rubber band, two biconvex lenses and a smartphone.

We now have the computational power, screen resolution and refresh rates to deliver VR in a small, inexpensive headset. And within a year, VR will be a commercial reality. We know that users will be able to play video games, sit courtside at a basketball game and view porn. But what about watching the news or a documentary? What is the potential for journalism in virtual reality?

Virtual reality is, of course, not new. A generation of media and tech researchers used either cumbersome headsets or VR ‘caves’ to experiment with virtual environments. That research focused mostly on how humans engage with virtual environments when the mind is tricked into thinking they are real. Do we learn, care, empathize and fear as we do in real life? Do we feel more? This research is tremendously important as we enter a new VR age, out of the lab and into people’s homes.

In addition to the headsets, a second technology is set to transform the VR experience. While initial uses of the Oculus Rift and similar devices have focused on computer generated imagery (gaming) and static 360° images (such as Google Street View), new experimental cameras are able to capture live motion 360° and 3D virtual reality footage.

The kit is made from 12-16 cameras mounted to a 3D-printed brace; the footage from each camera is then stitched onto a virtual sphere to form a 360° virtual environment. While 360° cameras have been around for years, these new kits are also stereoscopic, adding depth of field. They are not yet commercially available, but several are in production, including one by the startup Jaunt and another by NextVR that uses six extremely high-resolution Red Epic Dragon cameras. We are working with the media production company Secret Location, which has also built a prototype, pictured below.

This new camera technology opens up a tremendous opportunity for journalists to immerse audiences in their stories and for audiences to experience and connect to journalism in powerful new ways. And this is the focus of a new Tow Center research project studying and prototyping live motion virtual reality journalism.

The project is a partnership between Frontline, Secret Location, and the Tow Center. James Milward, the CEO of Secret Location, is leading the production; Raney Aronson, the deputy executive editor of Frontline, is leading the field experiment and shoot; Dan Edge is taking the camera into the field; and I am leading the research. Together, along with Pietro Galliano, Sarah Moughty and Fergus Pitt, we will be running the demo project and authoring a Tow Brief, to be published in partnership with MIT, documenting the process and lessons learned.

The project recently won a Knight Foundation Prototype Grant.

In short, this project explores the extension of factual filmmaking onto this new platform. Unlike other journalistic VR work, such as the pioneering project by Nonny de la Pena, which has relied on computer-generated graphics, this project will be centered on live video, delivering an experience that feels more like documentary and photojournalism than a console game. There are few examples of this type of journalism. The one that comes closest would be Gannett’s recent project for the Oculus Rift called Harvest of Change.

The first phase of the Tow Center VR project has several components.

First, we are testing the equipment needed to capture live motion virtual reality footage. This includes a prototype 360°/3D camera and surround-sound audio recording. We recently held a training session for the camera at Secret Location’s Toronto office.

 

Twelve GoPros mounted in a 3D-printed brace.

The 360° stereoscopic camera, with directional microphone.

Second, we are deploying this video and audio equipment in the field on a story about the Ebola outbreak being directed for Frontline by Dan Edge, a renowned documentary filmmaker. This phase will test how the camera can be used in challenging environments. But crucially, it will also explore new journalistic conventions: How do you tell a story in VR? What does narrative look like? Dan is currently in Guinea with the camera, and he will be traveling to Liberia and Sierra Leone in early 2015.

Third, we will test new post-production VR processes, including the addition of interactivity and multimedia to the VR environment.

The demo will be launched in the spring, alongside the release of the feature Frontline documentary and an accompanying report documenting the experiment and what we have learned. We will also be hosting a panel on VR journalism at this year’s SXSW, featuring James Milward, Nonny de la Pena, and the head of Vice News, Jason Mojica.

We are all acutely aware that this emerging practice, while exciting, presents some challenging questions.

For the practice of journalism, virtual reality presents a new technical and narrative form. It requires new cameras, new editing and shooting processes, new viewing infrastructure and new levels of interactivity, and it can leverage distributed networks in new ways. In addition to these technical innovations, an emerging scholarly discourse is exploring how virtual reality challenges traditional notions of narrative form. Virtual reality, combined with the ability to add interactive elements, changes the positionality of the journalist, breaking down the fourth wall of journalism. Storytelling is pulled from its bounded linear form and moved to a far more fluid space where the audience has new (though still limited) agency in the experience of the story. This changes how journalists must construct their stories and their place in them, and challenges core journalistic assumptions of objectivity and observation. It also changes how audiences engage with journalism, bringing them into stories in a visceral, experiential manner not possible in other mediums.

CBC Interview on Virtual Reality Journalism with Taylor Owen and Nonny de la Pena

More conceptually, virtual reality journalism also offers a new window through which to study the relationship between consumers of media and the representation of subjects. Whereas newspapers, radio, television and then social media each brought us closer to being immersed in the experience of others, virtual reality has the potential to further break down this distance. A core question is whether virtual reality can provide feelings of empathy and compassion similar to real-life experiences. Recent work has shown that virtual reality can create a feeling of ‘social presence,’ the sense that a user is really there, which can generate far greater empathy for the subject than other media representations. Others have called this experience ‘co-presence,’ and are exploring how it can be used to bridge the distance between those experiencing human rights abuses and those in a position to assist or better understand conflict.

It is our hope that this initial project, as well as a planned larger multiyear research project, will begin to shed light on some of these questions.


Upcoming Events


All-Class Lecture: The New Global Journalism

Tuesday, Sep. 30, 2014, 6:00pm

(Lecture Hall)

Based on a new report from the Tow Center, a panel discussion on how digital technology and social media have changed the work of journalists covering international events. #CJSACL

Panelists include report co-authors: 

Ahmed Al Omran, Saudi Arabia correspondent at The Wall Street Journal

Burcu Baykurt, Ph.D. candidate in Communications at Columbia Journalism School

Jessie Graham, Senior Multimedia Producer at Human Rights Watch

Kelly Golnoush Niknejad, Editor-in-Chief at Tehran Bureau

The program will be moderated by Sheila Coronel, Dean of Academic Affairs.

Event begins at 6 PM

RSVP is requested at JSchoolRSVP@Columbia.edu

Past Events

Tow Center Launches Three Tow Reports on UGC, Sensors, and Data-driven Journalism


The Tow Center team is thrilled to launch three new research reports.

Amateur Footage: A Global Study of User-Generated Content in TV and Online News Output, written by Claire Wardle and Sam Dubberley, is the result of a major global study of the integration of user-generated content (UGC) into television broadcasts and online news output.

Sensors and Journalism, led by Fergus Pitt and including a wide range of contributors, explores how recent advances in sensor networks, citizen science, unmanned vehicles and community-based data collection can be used by a new generation of sensor journalists to move from data analysis to data collection. The report critically reviews recent prominent uses of sensors by journalists, explores the ethical and legal implications of sensing for journalism, and makes a series of recommendations for how sensors can be integrated into newsrooms.

The Art and Science of Data-Driven Journalism, by Alex Howard, provides a recent history of, and current best practices in, data and computational journalism, based on dozens of interviews with industry leaders.

This research was made possible by grants from the Knight and Tow Foundations.  More details on the Tow Center research program can be found here.

All three reports will be launched at today’s Tow Center conference, Quantifying Journalism: Data, Metrics, and Computation, which will also include panels on newsroom metrics, data journalism, and sensors, as well as talks by a range of Tow Fellows. Further information on today’s conference can be found here. All sessions will be broadcast live starting at 9am (EST) here.


New Tow Center Research Project Explores Use of User Generated Content by Television News


We are excited to announce a new Tow Center research project that will explore the use of user-generated content (UGC) by television news. While news organizations have incorporated UGC to varying degrees for over a decade, there has been limited analysis of this increasingly diverse practice. “Amateur Footage: A Global Study of User-Generated Content in TV News Output” will study eight international news channels (BBC World, CNN International, Euronews, France 24, Al Jazeera, Al Jazeera English, NHK World and NTN24) and explore a range of issues related to their use of UGC, including workflow, verification, rights, payment and ethics.

The research project is conducted by two Tow Research Fellows, Dr. Claire Wardle and Sam Dubberley. Both have extensive experience and contacts within the international broadcast industry. Claire is the head of training at Storyful, a news agency of the social media age. She was previously an academic at the Journalism School at Cardiff University and holds a PhD in Communication from the University of Pennsylvania. Sam was head of the Eurovision News Exchange, where he managed the desk delivering international news content to over 70 global broadcasters. He holds an MA in Media and Communication from the University of Leicester.

“It is very important that the Tow Knight projects address research issues which are relevant to current practice,” said Emily Bell, Director of the Tow Center. “The trend for major breaking stories to be captured by smart phones by citizens rather than professional camera crews makes the area a timely subject for research and investigation. If we can help establish standards and protocols in this area it will be of benefit to both citizens and news organizations.”

This crossover of academic and professional experience in the news industry gives the research team unparalleled access to international newsrooms and a deep understanding of the role UGC has come to play in newsgathering and output.

Read what fellows Claire Wardle and Sam Dubberley have to say about their project.


The Tow Center Announces First Round of Tow/Knight Research Projects



Over the past few months we have been putting together an exciting new program of research, the Tow/Knight Projects, marking a first for us at the Tow Center, the Columbia Journalism School, and the changing field of journalism. With generous funding from both The Tow Foundation and the John S. and James L. Knight Foundation, we set out to attract proposals from those who care about the rapid evolution of digital journalism, whether they are academic researchers or journalists.

The mandate of the Tow/Knight research program is to study three themes that we think are central to the future of journalism: how do you measure the impact of journalism in a digital media environment; what does it mean to be ‘transparent,’ for both media and those the media hold to account; and how can data be better understood and incorporated into the practice of journalism?

Together, we think these projects, and the Tow Fellows  who will be leading them, represent an ambitious start to the program. We want the projects funded by The Tow Foundation and the John S. and James L. Knight Foundation to be an important part of stimulating debate, exploration and change in how we do and how we teach journalism over the coming years.

This represents our first round of awards and Tow/Knight Projects. We will be announcing many more in due course, and we encourage anyone with a proposal to pitch to contact us.

The Sensor Journalism Project, led by Fergus Pitt, explores how recent advances in sensor networks, citizen science, unmanned vehicles and community-based data collection can be used by a new generation of sensor journalists to move from data analysis to data collection. This project will involve a series of experiments pairing journalists with scientists to demonstrate the utility of sensor data collection and storytelling. The lessons learned in these experiments will form the core of a sensor journalism curriculum.

The Single-Subject News Network, led by Lara Setrakian and Kristin Nolan, will study, build a network of, and develop best practices for journalist-founders who have designed custom digital outlets focused on one area of coverage. Starting this fall, the Single-Subject News Network will bring together 20 leaders in the field to help develop best practices and advice for others.

Newsroom Places and Spaces in a Post-Industrial Age, led by Nikki Usher, will track a prominent sign of adaptation to a post-industrial world: newsrooms that have left their buildings for smaller offices, often designed entirely around a digital-first model, or that have repurposed their newsrooms to make way for non-journalists, hoping for synergy and a way to fill empty space.

Digital Activism and Citizen Journalism, led by Phil Howard, explores the intersection of citizen journalism and digital activism, developing original data and the visualization and query tools to make the data useful for working journalists and students doing advanced coursework in journalism.

Metrics: Production and Consumption, led by Caitlin Petre, studies the role of metrics in news organizations and the decision-making and design processes of analytics firms.

The Future of Digital Longform, led by Anna Hiatt, seeks to define the new format “digital longform,” articulate criteria by which digital longform journalism is judged and valued, and lay out and discuss successful models for soliciting, editing, publishing, and disseminating — and, of course, monetizing — longform content in the digital ecosystem.

In a series of projects on data journalism, Jonathan Stray, Nick Diakopoulos, and Alex Howard will conduct a range of academic research, public engagement and development of best practices in the field of data and computational journalism. Jonathan Stray will explore algorithm efficiency and objectivity in document mining, and the privacy implications of location data. Nick Diakopoulos will write regularly for the Tow Center, host a lecture series on computational journalism, and work on a research project exploring data storytelling. Alex Howard will author a report on current best practices in data journalism and report for the Tow Center on issues and stories surrounding the emerging practice.

We also have two co-funded research projects with the Brown Institute for Media Innovation, at Columbia Journalism School and Stanford Engineering School:

CityBeat: A collaboration between The New York World, housed in the Columbia Journalism School, and the Social Media Information Lab at Rutgers University, this project will look for newsworthy events in the patterns of real-time, geotagged social media feeds.

The Declassification Engine: A partnership between faculty and students in the Departments of History, Statistics and Computer Science at Columbia University, this project will probe the limits of official secrecy by applying natural language processing software to archives of declassified documents to examine whether it is possible to predict the contents of redacted text, attribute authorship to anonymous documents and model the geographic and temporal patterns of diplomatic communications.

All of these projects will be hosting events at Columbia, blogging regularly on the Tow Center blog, and publishing their findings here on the Tow Center site over the coming year.

 


The Surveillance Arms Race


How western technology companies are helping autocratic governments monitor and control their citizens

There is a new arms race emerging between people who want to communicate freely and securely and governments that want to monitor and limit this communication. In democratic countries, this government interference ranges from the mass monitoring of telecoms to flirtations with cutting off social media flows and shutting down cell towers in protest areas. When autocratic countries face crisis and conflict, however, the battle for control over communication is more troublesome and the risks are more acute.

Linking the interference run by governments in democratic and autocratic countries are the technologies deployed by both. And therein lies a paradox: the tools that enable autocratic governments to monitor and control their citizens are produced by western technology companies.

Much like the arms trade, this often creates an awkward scenario in which western countries end up supporting opposition movements that are fighting against technology bought from western countries. Sometimes this collusion backfires in provocative and potentially controversial ways. For example, in Syria, American journalist Marie Colvin and French photographer Rémi Ochlik were killed by a mortar attack that was most likely carried out by targeting their satellite phones. It is widely held that this technology was provided by western companies.

There are many recent examples of this phenomenon, especially within the context of the Arab Spring. High-profile surveillance technology companies, such as the UK-based Gamma Group, maker of FinSpy, offered surveillance services to regimes in Egypt, Tunisia, Libya, Bahrain, and Syria. Google engineers discovered contract proposals between Gamma and the Mubarak regime – €250,000 worth of spy technology that would “enable them [Egypt] to intercept dissidents’ emails, record audio and video chats, and take copies of computer hard drives.” The SpyFiles operation by Wikileaks and Privacy International further revealed 287 documents indicating that surveillance companies such as the French arms dealer Amesys sold both spyware and malware technologies (including Trojans) to the Gaddafi regime.

The Citizen Lab at the University of Toronto has uncovered a wide range of examples of complicity between western companies and authoritarian regimes. Most recently, it showed that devices manufactured by Blue Coat Systems, a California-based hardware company, were in use in 61 countries, including some with histories of human rights abuses. In 2011, it detailed how Syria used Blue Coat software both to censor the Internet and to root out particular activities linked to pro-democracy activists.

Western governments use this same type of commercial filtering and monitoring technology to monitor and restrict the online behaviour of their employees. This means that western governments could very well be implicitly supporting private companies that develop technologies assisting the oppressive regimes they oppose.

Indeed, if one were to attend a trade show for such technologies, as a Washington Post journalist recently did, one would find more than 35 United States federal agencies buying the very same technologies as the autocrats. As reported in the Atlantic, Jerry Lucas, who runs a trade show called ISS World, known as the “Wiretapper’s Ball,” was asked by the Guardian whether he would be comfortable with Zimbabwe and North Korea buying technology at his trade shows. He responded: “That’s just not my job to determine who’s a bad country and who’s a good country. That’s not our business, we’re not politicians … we’re a for-profit company. Our business is bringing governments together who want to buy this technology.”

The U.S. State Department, which has spent $70 million promoting internet freedom abroad, is part of a government that has few regulations on the trade of the technology that prevents such freedom. A bill has been before the United States Congress to prevent the sale of this technology to “Internet-restricting countries” since 2006, but the bill faces implementation challenges, as the list of countries in question now includes most nation states.  And there are other real limits to what western governments can do, due to both the scale of the industry, estimated at $5 billion a year globally, and the limits of contemporary international law.

There have been some positive steps: last year a U.S. congressional subcommittee passed the Global Online Freedom Act (GOFA), “creating a new transparency standard for Internet companies listed on U.S. stock exchanges and operating in countries that substantially censor or control the Internet.” GOFA would force companies listed on U.S. stock exchanges to release information on their human rights due diligence.

Of course, these technologies have the potential to be used for both positive and negative ends (they are dual-use). This poses a particular challenge to governments trying to use them for good. For example, the U.S. government is funding Commotion Wireless, a sophisticated hacking project that seeks to enable activists by undermining internet censorship in countries such as Syria and Iran; however, the FBI recently warned that these same anonymizing and encryption tools might be “indicators of terrorist activities.”

The question for policymakers, then, is whether anything beyond challenging regulatory measures can be done to overcome the dual-use dilemma, or whether it is simply a fact of life in a radically open operating environment. Whatever the reply, a relatively simple place to start would be to support the development of technologies that empower individuals, rather than enabling the production and trade of tools used for surveillance and oppression.

For example, a Swedish research team recently developed a new tool that allows Tor communication (a tool that anonymizes internet use) to be cloaked within services like Skype in order to circumvent recent changes to the Chinese “firewall” that had compromised those who used those services. Similarly, a team at Columbia University’s Graduate School of Journalism, in partnership with Stanford Computer Science, has built an app called Dispatch that allows for secure communication between journalists and their sources in areas of conflict. Another app, Silent Circle, allows users to send encrypted files of up to 60 megabytes via text message. These are tools that our governments should support. One can even imagine a virtual embassy incentivizing such projects. Too often, however, these surveillance-evading tools ruffle the feathers of autocratic and democratic governments alike.

What we are ultimately seeing is an arms race between oppressive governments and their citizens. It is high time that our democratically elected governments cease supporting, either tacitly or explicitly, the technologies enabling government surveillance.

This post also appeared on the international affairs platform www.opencanada.org


What the Tesla Affair Tells Us About Data Journalism


Consider for a moment two scenarios.

First: a malicious energy reporter tasked with reviewing an electric car decides he is going to fake the review. Part of this fictional narrative is that the car needs to run out of battery power sometime during the review. He arrives at one of the charging stations and, instead of plugging in, spends a few minutes circling the parking lot trying to drain the battery.

Second: an energy reporter is tasked with reviewing the potential of a new electric-car charging network. He arrives at one of the charging locations in the dark and can’t find the charging station. He drives around the parking lot several times looking for it before finding it and charging his car.

Here is the thing: as Craig Silverman recently pointed out to me, we actually have no idea, based on the interpretation of the review data released by Tesla, which narrative is true. All the data shows is a car driving around a parking lot. And herein lies the principal lesson of the whole Tesla affair: data is laden with intentionality and cannot be removed from the context in which it was derived. We do not know, from these data alone, what happened in that parking lot.

David Brooks touched on this very issue in a recent (somewhat overly maligned, in my opinion) column on the limits of big data. While his Italian banking analogy felt misplaced, there is actually a large body of research backing up his general themes. And his point that data struggles with context is directly relevant to the Tesla dispute:

Data struggles with context. Human decisions are not discrete events. They are embedded in sequences and contexts. The human brain has evolved to account for this reality. People are really good at telling stories that weave together multiple causes and multiple contexts. Data analysis is pretty bad at narrative and emergent thinking, and it cannot match the explanatory suppleness of even a mediocre novel.

In the case of the Tesla review, it is this context that was both poorly recorded by Broder and missing from the Tesla data analysis. This does not mean the analysis is wrong. But it does mean it’s incomplete.

A couple of further points about the role data played in this journalistic dispute.

First, the early triumphalism against the New York Times, in the name of both Tesla and data transparency, was clearly premature. In Tesla’s grand rebuttal, Musk clearly overplayed his rhetorical hand by arguing that the review was faked, but he also overstated both the case he could make with the data and the level of transparency he was actually providing. Tesla didn’t release the data from the review. Tesla released its interpretation of the data from the review. This interpretation took the form of the graphical representation the company chose to give it, as well as the subjective write-up imposed on it.

What is interesting is that even with this limited and selective data release (i.e., without the raw data), entirely different narrative interpretations could be built. Broder and his New York Times team presented one. But Rebecca Greenfield at the Atlantic provided an even more detailed one. There are likely elements of truth scattered across these three interpretations of the data. But they are just that: interpretations.

Second, the only person who can provide the needed context for this data is Broder, the reviewer himself. And the only way he can convey this information is if we trust him. Because of his “problems with precision and judgement,” as the New York Times’ Public Editor Margaret Sullivan put it, his trust was devalued. So the missing journalistic piece of this story is lost. Even in a world of data journalism, trust, integrity and journalistic process still matter. In fact, they matter all the more.

Finally, we can’t lose sight of the outcome Tesla wanted from this. They wanted PR for their new vehicle. So amongst all of the righteous indignation, it is worth noting that journalistic principles are not their core objective – good stories about their products are. These may or may not be aligned. This is why, for example, Broder was given significant support and access during his review trip (some of which ultimately proved to be misguided).

An example of this discrepancy surrounds the one clear fact about the Model S (and presumably electric cars in general) that was revealed in the review: it loses significant charge when not plugged in during cold weather. Tesla would rather this fact had not emerged in the review. But it did. And as Steven Johnson pointed out, it has significant implications, specifically for city drivers. For one, it makes parking the Model S on the street in winter (as many urban dwellers would have to do) largely impractical.

So, to recap: the Tesla affair reinforces that data does not equal fact; that context matters enormously to data journalism; that trust and documentation are even more important in a world of data journalism; and that companies will continue to prioritize positive PR over good journalism in reviews of their products.