Claire Wardle joins Tow Center as Research Director

We are pleased to announce the appointment of Claire Wardle as research director for the Tow Center. Wardle is currently the senior social media officer at the United Nations High Commissioner for Refugees (UNHCR), and has a significant background in academia, journalism research, and working with journalists in the field.

“We are delighted that Wardle is joining the Tow Center to lead our research program. She has the rare combination of being a leading figure in academic research in the field and someone who has a firm grasp of the practical needs of newsrooms,” said Emily Bell, director of the Tow Center for Digital Journalism.

Wardle holds a PhD in Communications and an MA in Political Science from the University of Pennsylvania. She taught at the Cardiff School of Journalism, Media and Cultural Studies in the United Kingdom for five years. She is one of the world’s leading experts on user-generated content, and has led two substantial research projects investigating how it is handled by news organizations. She was previously director of News Services for Storyful, and currently sits on the World Economic Forum’s Committee on Social Media. In October 2014, Wardle co-founded Eyewitness Media Hub, a non-profit initiative committed to providing content creators and publishers with continuing research and resources. From October 2013 to May 2014, she was a research fellow at the Tow Center, and published the Tow-Knight Report, “Amateur Footage: A Global Study of User-Generated Content.”

In early January 2015, the Tow Center announced $3 million in new funding from the Knight Foundation. The funding will build on the Tow Center for Digital Journalism’s innovative research and field experiments that explore the changing relationship of journalism and technology, while helping newsrooms and educators meet tomorrow’s information needs.

“The Tow Center has established an extraordinary, distinctive record of research on the intersection of journalism’s democratic function and the new tools of the digital age. The Center’s work has been at once visionary and actionable. Wardle’s research and career are in a similar vein, and we are very fortunate to have her leadership at the school,” said Dean Coll.

Wardle will work with Tow Center Director Emily Bell to develop a new research program for the Center, with a focus on the following four areas:

    • Computation, Algorithms and Automated Journalism will explore ways to bring computer science into the practice of journalism and look at the benefits and challenges that automated reporting tools, large-scale data and impact evaluation metrics, among other solutions, can bring to the newsroom.
    • Data, Impact and Metrics will extend the work of Al Jazeera data journalist Michael Keller and metrics specialist Brian Abelson who are using technology tools and data to explore which stories have impact and ways to reproduce these effects.
    • Audiences and Engagement will study the new relationship between the journalist and the audience, examining the impact and new demands that social media, participatory journalism, crowdsourcing and other developments are creating in the field.
    • Experimental Journalism, Models and Practice will develop field experiments with journalists around themes such as the influence of philanthropy on news startups; surveillance technologies used by and against journalists; applying game design techniques in newsrooms; and gender balance and diversity in journalism.

The Tow Center, established in 2010, works at the intersection of journalism and technology, researching new challenges in the rapidly changing world of digital journalism. The Center has quickly become a leader in the field, spearheading research in areas like sensors and journalism, algorithmic accountability, digital security and source protection for journalists, and the digitally native foreign correspondent.

In conjunction with our announcing Wardle’s appointment, we are pleased to announce a formal call for research fellows and research project proposals. Read more here.

All applications for research fellow positions must be submitted via the Columbia University Recruitment of Academic Personnel Site (RAPS). Read the full job description and submit an application here.

Call for Research Applications – Spring 2015

In January 2015, the Tow Center for Digital Journalism was awarded $3 million in new funding from the Knight Foundation to expand research into the following four areas: Computation, Algorithms and Automated Journalism; Data, Metrics and Impact; Audiences and Engagement; and Experimental Journalism, Models and Practice. These four areas are described in greater detail below.

Call for Research Fellows

We are pleased to announce a call for applications for research fellowships to lead and oversee research in these areas. Research fellows will occupy full-time positions with the Tow Center, and direct the course of research projects in their area of specialization. Persons interested in these positions should read more below, and apply via the Columbia University Recruitment of Academic Personnel Site (RAPS), here:

Call for Project Proposals

We are likewise pleased to announce a call for research project proposals. We invite students, researchers, faculty and practitioners in the fields of computer science and journalism to propose potential research projects that fall within our four areas of inquiry. We outline the proposal process for research projects in further detail below, and encourage you to adhere closely to that outline.

Research Fellows

Research Fellows are employed full-time with the Tow Center for Digital Journalism at Columbia Journalism School in the role of Associate Research Scholar. Research Fellows will be responsible for developing research streams that tackle large and cutting-edge issues in digital journalism through a combination of field research, workshops, events and published articles, briefs, and reports. Applicants must apply through the Columbia University Recruitment of Academic Personnel Site (RAPS) at the link below.

Apply here:


The ideal candidate will have:

  • Demonstrated deep knowledge of the current practice, thinking and future potential in one or more of the Tow Center’s research themes (articulated above)
  • A set of original ideas for research projects in one or more of those themes
  • Demonstrated ability to produce research that is rigorous and authoritative, contains original ideas and analysis, and is relevant to the journalism industry
  • A graduate degree or PhD in journalism, communications or a closely related field; candidates with experience working in research & development in journalism, communications or a related field will also be considered.
  • Demonstrated experience with qualitative and quantitative research methods
  • A network in the innovative journalism community, or the journalism research community
  • Demonstrated ability to write and edit well for the Tow Center’s community: working journalists, newsroom managers and leaders, academic researchers, journalism students and educators
  • Demonstrated ability to work in small teams.


Research Fellows will:

  • Work with the Tow Center’s Research Director to lead the Tow Center’s research in one of the four themes
  • Generate ideas for research projects
  • Produce their own research (which may include written reports, journalistic experiments, events, workshops, panels or other forms)
  • Ensure that work by contracted research fellows is delivered at high quality and to an appropriate timetable
  • Regularly blog on their areas of expertise
  • Help disseminate the Tow Center’s work throughout the digital journalism community

Call for Project Proposals – Scope, Format, and Process

Project Proposal Scope

Research projects range from small to large in scope; a small project might comprise a few months of local field research and writing to produce a white paper on a specific topic at the forefront of the study and practice of digital journalism; a large project might comprise the design and implementation of a technology or process to then be tested and evaluated in an applied journalism context.

Project Proposal Format

Project proposals are reviewed every six months. Proposals should be limited to three pages. All project proposals should be sent as an MS Word document to the Tow Center’s senior research fellow, Fergus Pitt, at: and must include:

  • A brief description of the proposed research activities
  • A distinct explanation of the significance and timeliness of the research, and the anticipated impact and applicability of the resulting findings.
  • Project deliverables, including blog posts, papers, events and programming, applications
  • Project personnel, including short biographies and links to CVs or resumes for all proposed team members, indicating why they are capable of delivering the work
  • Project timeline, clearly indicating proposed timeline for work, project phases, and expected time commitments from team members
  • Project budget, including labor, materials and/or travel

If you have ideas for a research project but would like to meet with a member of our research team prior to the submission of a proposal, contact the Tow Center’s senior research fellow, Fergus Pitt, at:

Project Proposal Review Process

We will announce and advertise open calls for project proposals every 6 months. Our timeline for the next year is projected below.

March 31, 2015: Research Project Proposal period opens
April 30, 2015: Research Project Proposals are due
May 15, 2015: We will have contacted Project Proposal applicants whose work we are interested in

September 15, 2015: Research Project Proposal period opens
October 15, 2015: Research Project Proposals are due
October 30, 2015: We will have contacted Project Proposal applicants whose work we are interested in

Applications received after each closing deadline must be re-submitted by the applicant during the next Research Project Proposal window to receive consideration.

Our Research Themes, expanded:

  • Computation, Algorithms and Automated Journalism
    Computer science now plays a key role in journalism. Whether it is the investigation of algorithms, large-scale data projects, automated reporting tools or detailed metrics evaluation, a whole new area of professional expertise now sits alongside more traditional journalistic skills. This stream will explore the leading edge of this new practice, work with institutions that are actively and progressively embracing computation, and find innovative ways to bring computer scientists into the practice and development of journalism.
  • Data, Impact and Metrics
    The Tow Center is already known for its work in data journalism, both in research and the classroom, and looks to extend this work over the next three years. Journalists often get into the profession inspired to “make a difference”, but newsrooms are naïve in the science of figuring out which stories have impact, why that is so, and how to reproduce those effects. Developing work around the use of metrics in newsrooms has been a cornerstone of the Tow Center since its inception four years ago. Work is needed to develop new metrics and measurement as our access to data increases and as ways of finding and consuming journalism develop.
  • Audiences and Engagement
    Participatory journalism, social journalism, crowdsourcing, open journalism: there are many ways of describing the new pact between the journalist and the audience they seek to engage and inform through new platforms and tools. With experiments taking place in newsroom comment systems and social platforms, publishers are grappling with how to evaluate the direct relationship with audiences. As some news organisations withdraw from commenting or outsource their interactions to other social platforms, we see this area as needing far more research to examine questions such as: What are best practices? Do news organisations which employ more engagement tools improve key metrics? When data scraping is more efficient than crowdsourcing, what is the future of participatory techniques and social media teams?
  • Experimental Journalism, Models and Practice
    This track will develop our field experiments with practicing journalists and newsrooms to test new techniques and technologies. In models and practice there are a number of areas where we are already establishing research or will start new investigations. These may include the transformation of broadcast newsrooms, the influence of philanthropy and non-profit funding on news start-ups, and a continuation of our work on surveillance technologies used by and against journalists.

Research Audiences

The Tow Center’s different research projects engage particular communities in different ways, both in terms of who produces the work and who consumes it. Researchers and project proposers should let the target audiences inform their plans for research content, activities and dissemination.

The practicing digital journalism community forms the Tow Center’s biggest constituency; these are largely working reporters and newsroom managers. For this group, the Tow Center should identify, articulate and analyse the new digital journalism ideas that will affect their work over the coming years. If your project is likely to be of interest to this group, you may want to include specific training sessions or seminars and panels at mainstream journalism conferences, alongside publication through the Tow Center’s website and social media channels. To reach these audiences, you may want to pitch articles based on your work to journalism news channels including PBS Mediashift, Nieman Lab, Poynter and the influential industry bloggers.

For news executives the Tow Center aims to provide awareness of significant new ideas, access to our fellows for their expertise, workshop space to focus on ‘the next thing’, and avenues of engagement through which their teams can be involved. If your research is likely to be of particular interest to news executives, you may want to include specific meetings or events in your deliverables.

Current journalism students and educators are another constituency for the Tow Center. They are often reached through similar activities and channels to practicing journalists; however, fellows may also choose to publicize and distribute their work directly to educators. We welcome collaboration with faculty, researchers and students at other academic institutions.

For the academic research community, the Tow Center also aims to provide opportunities to publish quality, relevant work on fast timelines. The Tow Center will reach this community through its specialist conferences, including ISOJ and NICAR, and online channels including email lists and Twitter groups.

Research Type: Tone, Language & Output Rhythm

The qualities of the tone, language and output rhythm of the Tow Center’s research flow logically from the Tow Center’s identity, the Center’s mission, and the Center’s audiences: The Tow Center is a great research institution, housed within an excellent journalism school and designed to respond to and serve the rapidly changing journalism industry. The Tow Center’s mission is to help individual journalists, students, faculty, news organizations and policymakers to develop and expand their thinking around and practice in digital journalism.

Tow Center research projects need to serve busy people who consume a lot of information: The tone and language must therefore efficiently convey that the content is insightful, stimulating, backed by quality research that would otherwise be unavailable, and is relevant to the audiences’ professional lives. Different projects will reach slightly different constituencies, so the tone and language can likewise vary, but they should not stray far from the description above.

The output rhythm should similarly serve the constituencies: Although the Center will study topics that remain relevant over the medium to long term, the precise trajectories of journalistic movements may be unpredictable, and our audiences need to respond quickly. For that reason, each program of work should be agile, producing regular audience-facing deliverables every few months and adjusting course as needed.

Towards a Standard for Algorithmic Transparency in the Media

Last week, on April 21st, Facebook announced a few updates to its algorithmically curated news feed. The changes were described as motivated by “improving the experience” and making the “balance of content the right one” for each individual. And while a post with some vague information is better than the kind of vapid corporate rhetoric Jay Rosen recently complained about, we need to develop more sophisticated algorithmic transparency standards than a blog post along the lines of, essentially, “well, it’ll be a little less of this, and a little more of that.”

That’s why, last month at the Tow Center for Digital Journalism, we convened about fifty heavy hitters from the media industry and from academia to talk about algorithmic transparency in the media. The goal of the workshop was to discuss and work towards ideas to support a robust policy of news and information stewardship via algorithm. To warm things up we heard seven amazing five-minute “firestarter” talks with provocative titles like “Are Robots Too Liberal?”, “Accessible Modeling” and “An Ethical Checklist for Robot Journalism”. The videos are all now online for your viewing pleasure.

But the bulk of the workshop was spent delving into three case studies where we see algorithms operating in the public media sphere: “Automatically Generated News Content”, “Simulation, Prediction, and Modeling in Storytelling”, and “Algorithmically Enhanced Curation”. Participants broke out into groups and spent an hour discussing each of these with a case study facilitator. They brainstormed dimensions of the various algorithms in use that could be disclosed, relating to how they work or are employed. They evaluated these dimensions on whether disclosure would be feasible, technically or financially, and what the expected impact and significance to the public would be. And they confronted dilemmas like how and whether the algorithm could be gamed.

Based on the numerous ideas generated at the workshop, I boiled things down into five broad categories of disclosable information:

Human Involvement

There is a lot of interest in understanding the human component of how algorithms are designed, how they evolve and are adjusted over time, and how they are kept in operation. Facebook and Google: we know there are people behind your algorithms! At a high level, transparency here might involve explaining the goal, purpose, and intent of the algorithm, including editorial goals and the human editorial process or social context crucible from which the algorithm was cast. Who at your company has direct control over the algorithm, and who has oversight and accountability? Ultimately we want to know who created this thing: the authors, the designers, the team. Who is behind these algorithms?

More specifically, for an automatically written story, this type of transparency might include explaining if there were bits that were written by a person, and if so which bits, as well as if the whole thing was reviewed by a human editor before being published. For algorithmic curation this would include disclosing what the algorithm is optimizing for, as well as rationale for the various curation criteria. Are there any hard-coded rules or editorial decisions in the system?

The Data

Algorithmic systems often have a big appetite for data, without which they couldn’t do any fancy machine learning, make personalization decisions, or have the raw material for things like automatically written stories. There are many opportunities to be transparent about the data that are driving algorithms in various ways. One opportunity for transparency here is to communicate the quality of the data, including its accuracy, completeness, and uncertainty, as well as its timeliness, magnitude (when training a model), and assumptions or other limitations. But there are other dimensions of data processing that can also be made transparent, such as how it was collected, transformed, vetted, and edited (either automatically or by human hands). Some disclosure could be made about whether the data was private or public, and whether it incorporated dimensions that, if disclosed, would have personal privacy implications. Finally, in the case of automatically written text, it would be interesting to show the connection between a given chunk of text and the underlying data that contributed to it.
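
The data-quality dimensions above lend themselves to a structured disclosure. As a purely illustrative sketch (our own, not a standard and not something the workshop produced), here is what a minimal machine-readable data disclosure record might look like in Python; every field name and value is hypothetical:

```python
from dataclasses import dataclass, asdict

@dataclass
class DataDisclosure:
    """Hypothetical disclosure record for the data behind an algorithm."""
    source: str                # where the data came from
    collection_method: str     # how it was gathered
    last_updated: str          # timeliness
    completeness: float        # fraction of expected records present
    known_limitations: str     # assumptions, gaps, uncertainty
    contains_personal_data: bool

# An invented example record for an automated-news data feed.
disclosure = DataDisclosure(
    source="City earthquake sensor feed",
    collection_method="automated API pull, hourly",
    last_updated="2015-04-21",
    completeness=0.97,
    known_limitations="sensor outages under-report small tremors",
    contains_personal_data=False,
)

# The record could be published alongside the story or feed it describes.
print(asdict(disclosure))
```

A record like this could be rendered as a sidebar for readers or exposed as JSON for researchers; the point is only that the dimensions named above are concrete enough to enumerate.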

The Model

Modeling involves building a simplified microcosm of some system using data and a method that predicts, ranks, associates, or classifies. This really gets into the nuts and bolts, with many potential avenues for transparency. Of high importance is knowing what the model actually uses as input: what are the features or variables used in the algorithm? Oftentimes those features are weighted: what are those weights? If there was training data used in some machine learning process: characterize the data used for that along all of the potential dimensions enumerated above. Since some software modeling tools have different assumptions or limitations: what were the tools used to do the modeling?

Of course this all ties back into human involvement as well, so we want to know the rationale for weightings and the design process for considering alternative models or model comparisons. What are the assumptions (statistical or otherwise) behind the model and where did those assumptions arise from? And if some aspect of the model was not exposed in the front-end, why was that?
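
To make the idea of disclosing features and weights concrete, here is a minimal sketch of a linear story-ranking model whose weights and per-feature contributions are published alongside its scores. This is our own illustration, not any real newsroom’s system; every feature name and weight is invented:

```python
# Hypothetical disclosed weights for a story-ranking model.
RANKING_WEIGHTS = {
    "recency_hours": -0.30,    # older stories are penalized
    "shares": 0.25,
    "comments": 0.15,
    "author_reputation": 0.10,
}

def score(story: dict) -> float:
    """Linear score: sum of each feature value times its disclosed weight."""
    return sum(RANKING_WEIGHTS[f] * story.get(f, 0.0) for f in RANKING_WEIGHTS)

def explain(story: dict) -> dict:
    """Per-feature contributions -- the kind of rationale discussed above."""
    return {f: RANKING_WEIGHTS[f] * story.get(f, 0.0) for f in RANKING_WEIGHTS}

story = {"recency_hours": 2.0, "shares": 10.0, "comments": 4.0}
print(score(story))
print(explain(story))
```

Even this toy example shows why disclosure questions compound: publishing the weights invites gaming, while publishing only the feature names leaves the rationale opaque.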

The Inference

Algorithms often make inferences, such as classifications or predictions, leaving us with questions about the accuracy of these techniques and about the implications of possible errors. Algorithm creators might consider benchmarking the inferences in their algorithms against standard datasets and with standard measures of accuracy to disclose some key statistics. What is the margin of error? What is the accuracy rate, and how many false positives versus false negatives are there? What kinds of steps are taken to remediate known errors? Are errors a result of human involvement, data inputs, or the algorithm itself? Classifiers oftentimes produce a confidence value, and this too could be disclosed in aggregate to show the average range of those confidence values as a measure of uncertainty in the outcomes. The disclosure of uncertainty information would seem to be a key factor, though also a fraught one. What are the implications of employing a classifier that you disclose to be accurate only 80% of the time?
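
Once a benchmark dataset exists, the aggregate error statistics discussed above are cheap to compute and disclose. A minimal sketch, with toy labels, predictions, and confidence values all invented for illustration:

```python
# Toy benchmark: ground-truth labels, a classifier's predictions,
# and its per-prediction confidence values (all invented).
labels      = [1, 0, 1, 1, 0, 0, 1, 0]
predictions = [1, 0, 0, 1, 1, 0, 1, 0]
confidences = [0.9, 0.8, 0.6, 0.95, 0.55, 0.7, 0.85, 0.9]

# The aggregate statistics a creator could disclose.
false_positives = sum(p == 1 and y == 0 for p, y in zip(predictions, labels))
false_negatives = sum(p == 0 and y == 1 for p, y in zip(predictions, labels))
accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
mean_confidence = sum(confidences) / len(confidences)

print(f"accuracy={accuracy:.2f}, FP={false_positives}, "
      f"FN={false_negatives}, mean confidence={mean_confidence:.2f}")
```

The hard part is not the arithmetic but the editorial choice: which of these numbers to publish, on what benchmark, and how to contextualize an 80% accuracy rate for a lay audience.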

Personalization, Visibility, and the Algorithmic Presence

Throughout the discussions there was a lot of interest in knowing if and when algorithms are being employed, in particular when personalization may be in use, but also simply to know, for instance, whether A/B testing is being employed. One participant put it as a question: “Am I being watched?” If personalization is in play, then what types of personal information are being used, and what is the personal profile of the individual that is driving the personalization? Essentially, people want to know what the algorithm knows about them. But there are also questions of visibility, which implies maintaining access to elements of a curation that have been filtered out in some way. What are the things you’re not seeing, and conversely, what are the things that you’re posting (e.g. in a news feed) that other people aren’t seeing? These comments are about having a viewpoint into an algorithmic curation different from your own personalized version, in order to compare and contrast the two. There was also an interest in having algorithmic transparency about the rationale for why you’re seeing something in your feed. What exactly caused an element to be included?


So, there’s your laundry list of things that we could potentially be transparent about. But the workshop was also about trying to evaluate the feasibility of transparency for some of these dimensions. And that was incredibly hard. There are several stakeholders here with poorly aligned motivations. Why would media organizations voluntarily provide algorithmic transparency?  The end game here is about accountability to the public, and transparency is just one potential avenue towards that. But what does ethical accountability even look like in a system that relies on algorithmic decision making?

We can’t really ever expect corporate entities to voluntarily disclose information that makes them look bad. If the threat of regulation were there, they might take some actions to get the regulators off their backs. But what is really the value proposition for an organization to self-motivate and disclose such information? What’s really at stake for them, or for users for that matter? Credibility and legitimacy were proffered, yet we need more research here to measure how algorithmic transparency might actually affect these attributes, and to what extent. To be most effective, the value proposition perhaps needs to be made as salient as: you will lose income or users, or some other direct business metric will be negatively impacted, unless you disclose X, Y, and Z.

Users will likely be interested in more details when something is at odds with their expectations or something goes wrong, like a salient error. The dimensions enumerated above could be a starting point for the range of things that could be disclosed in the event of user demand for more information. Users are likely to care most when they themselves are affected by an error, such as being incorrectly censored (a false positive). If corporations were transparent with predictions about individuals, and had standards for due process in the face of a false positive event, then this would not only empower users by allowing them to correct the error, but also provide feedback data that improves the algorithm in the future. This idea is perhaps the most promising for aligning the motivations of individuals and corporate actors. Corporations want more and better data for training their algorithms. Transparency would allow the users who care most to find and correct errors, which is good for the user, and for the company, because it now has better training data.

There was no consensus that there is a clear and present general demand from users for algorithmic transparency. And this is a challenging question, since many users don’t know what they don’t know. Many people may ultimately simply not care, but others will, and this raises the challenge of trying to meet the needs of many publics while not polluting the user experience with a surfeit of information for the uninterested. We need more research here too, along several dimensions: to understand what really matters to users about their consumption of algorithmically created content, but also to develop non-intrusive ways of signaling to those who do care.

Organizations might consider different mechanisms for communicating algorithmic transparency. The notion of an algorithm ombudsperson could help raise awareness and assuage fears in the face of errors. Or, we might develop new and better user interfaces that address transparency at different levels of detail and user interest. Finally, we might experiment with the idea of an “Algorithmic Transparency Report” that would routinely disclose aspects of the five dimensions enumerated above. But what feels less productive are the vague blurbs that Facebook and others have been posting. I hope the outcome of this workshop at least gets us all on the path towards thinking more critically about whether and how we need algorithmic transparency in the media.

Nicholas Diakopoulos is an Assistant Professor at the University of Maryland, College Park College of Journalism and a member of the UMD Human Computer Interaction Lab (HCIL). 

Privacy and Publication: The Ethics of Open Data, Algorithms and the Internet

Listen to an audio recording of the discussion on SoundCloud

On Thursday, April 16th at 5:00 pm, Professor of Computer Science Steven Bellovin and Assistant Professor of Journalism Susan McGregor hosted a Tow Tea on Privacy and Publication: The Ethics of Open Data, Algorithms and the Internet.

Digital journalism today operates in a global sphere not limited by the geographic boundaries or transmission speed of traditional print publishing, changing the impact of what happens when we hit “publish.” Information discovery and access to services are also increasingly mediated by algorithms whose design is not clearly governed by ethical constraints. Together, Professor of Computer Science Steven Bellovin and Assistant Professor of Journalism Susan McGregor discussed how these issues interact with social and ethical concerns in an era of universal access to online publishing.

Steven Bellovin is the Percy K. and Vidal L. W. Hudson Professor of computer science at Columbia University, where he does research on networks, security, and especially why the two don’t get along, as well as related public policy issues.  Bellovin is the co-author of Firewalls and Internet Security: Repelling the Wily Hacker, and holds a number of patents on cryptographic and network protocols.

Susan McGregor is Assistant Director of the Tow Center for Digital Journalism & Assistant Professor at Columbia Journalism School, where she helps supervise the dual-degree program in Journalism & Computer Science. She teaches primarily in areas of data journalism & information visualization, with research interests in digital security, knowledge management and alternative forms of digital distribution.


Tow Responsive Cities Initiative: Interview with Susan Crawford

Read the white paper here on the Tow Center website and here on gitbook

“What fiber is going to do is augment our human capabilities. It will give us an additional layer of life. Online life is not separate from offline life. Fiber will become increasingly relevant to how we live our life–and our lives increasingly will be lived in cities.”

In the fall of 2014, Tow Fellow Susan Crawford convened three workshops aimed at answering the question: What could a university center do to advance policymaking and planning for 21st century cities, focused on facilitating the growth of fiber-optic networks, improving local governance through use of data (transported by fiber), and supporting data-fueled journalism?

Crawford, who will be teaching telecommunications and information law at Harvard Law School beginning in July 2015 and recently co-authored the book The Responsive City: Engaging Communities Through Data-Smart Governance (with former Mayor of Indianapolis Stephen Goldsmith), has travelled widely, researching the public policy dimensions of fiber-optic networks around the world and gathering stories.

The push for fiber is not just about speed, but also about increased connectivity. “What fiber makes possible is limitless interaction, when needed, as needed. The deepest human desire is for presence, for connection — and fiber is essentially a pane of glass between you and the rest of the world,” says Crawford.

The workshops also aimed to develop pathways for bringing the best digitally-talented grad students into careers in local governance. Currently, 40%-50% of government employees are ready to retire, but Millennials may not be aware how satisfying and interesting these jobs are. And local governments need digitally-savvy employees to be effective.

We recently spoke with Crawford to learn more about the Responsive Cities Initiative.

Tow: The Responsive Cities Initiative brought together individuals from the civic data movement and journalists. Has this ever happened before?

Crawford: As far as I know, this hasn’t happened before in an intentional way. Many people can see the connections between data and journalism, and many people can see the connections between data and more effective governance, and many people can see the connections between fiber infrastructure and improved quality of life. But bringing all three of those camps together was the unique contribution of the Tow Responsive Cities Initiative.

Tow: What came out of this? Anything unexpected?

Crawford: These meetings were extraordinarily exciting and uplifting. Everybody involved felt that they were seeing new things in a new way. For example, chief data officers of cities had not been thinking about the centrality of fiber infrastructure to their ability to offer 21st century city services to everyone. And journalists had not been thinking about the excitement of the fiber story. At the same time, journalists are coming around to the idea of seeing themselves as part of the civic infrastructure of a city. They’re learning about surprising possibilities for collaboration between city officials and journalists in getting stories out about what’s changing in cities. It was thrilling to see the connections being made across the table among these highly experienced but equally highly isolated groups of people.

Tow: Do you see any of these connections persisting over time?

Crawford: I know the connections are persisting because I can see it. Data officers are thinking about ways to drive fiber into their cities. Journalists are writing more stories about fiber. At the same time, the moment involving journalists who understand data and can write about it effectively is accelerating. These meetings are indicative of a movement afoot in cities around the world. We just put a name to it.

Tow: Why the focus on cities instead of states?

Crawford: The US is one of the most urbanized countries in the world. Eighty percent of us already live in cities, and more will as the years go by. The point of The Responsive City is that cities are the places where democracy still works. Mayors are largely bipartisan, focused on making lives better for their citizens, and have power within their footprint that exceeds what a president can do. At the same time, people love their cities. They identify with them. The idea is that by having cities be visible to themselves, both inside the walls of city hall and on the streets and in neighborhoods, you end up with a thick mesh of democratic engagement that in turn makes democracy work better. It wouldn’t be possible without all three of these things–data, fiber, and journalism.

Tow: There has been a lot of discussion about rising inequality in cities in the past decade. I’m wondering how much digital literacy is part of the Responsive Cities Initiative. Is there concern about enabling all citizens to access fiber? How do you make sure everybody knows how to use the resources?

Crawford: The book is very intentionally entitled The Responsive City rather than the “smart city.” The idea is that the city is there for its people and needs to be responsive to their needs — listening to them, experimenting, ensuring that everyone is included, and that everyone is part of the story.

Digital literacy in general is an essential part of that inclusion narrative. The idea is that treating people with respect and dignity includes ensuring that they have the tools and abilities and insight necessary to have a thriving life. More effective governance, more widespread and cheaper high-speed Internet access, and access to actionable information about where they live are all part of that inclusion process.

Tow: The third session brought the Danish Fiber Network to the Tow Center. Can you tell us about this?

Crawford: That was an extraordinary session. We had 45 people involved, from members of Parliament and regulators to people who build fiber systems in Denmark, come and talk to a group of similarly situated Americans. The conversation was electric. Danes in rural areas have far better Internet access than Americans. At the same time, they had not been thinking about the possibilities of data use in cities to make citizens’ lives better.

Tow: And what were the reactions?

Crawford: The Americans were astonished at the Danish attitude towards high-speed Internet access. The Danes were astonished that cities get a cut of cable video revenue and so have a built-in conflict of interest favoring the pay TV industry. We learned that the two groups had a lot to work on together. Some of these people have been working in isolation, so pulling them together is magical.

Tow: Why fiber?

Crawford: What everybody needs is unlimited cheap information sharing capacity. That’s the substrate that will make everything else possible. I think of fiber Internet access as the street grid for the responsive city.

Tow: It seems like a part of Responsive Cities Initiative is creating communication channels in order to talk about communication channels.

Crawford: Yes, that’s true! Universities really need to be platforms today, bringing together disparate groups to make progress on important issues and research. I’m looking forward to continuing to pull these groups together to work on cross-cutting projects of mutual interest.

Tow: The university as an institution has come under a lot of criticism in the past decade. Yet, you seem to have faith in this institution and its ability to address policy issues. Can the university really push forward new ideas — not just ideas that are cloistered in texts and research journals?

Crawford: There is a dramatic shift, particularly at the top level of law and policy schools, to move from theory to practice, from writing articles for other academics to getting hands-on experience in a real-world context. The Responsive Cities Initiative is part of that. My plan is to work with as many students as I can to get them into a pipeline of projects and interactions with people inside city hall.

Tow: So, is a big part of this project to develop connections with young people and encourage them to enter local government?

Crawford: One of the most important directions is student engagement and student mentoring — helping enormously talented students who understand both project management and data go into government as a career.

Tow: Is there a more specific form this will take?

Crawford: I’m very concrete about this. This is called clinical learning, experiential learning: placements that are supervised jointly with the city. Having that kind of experience in graduate school can be a ramp to understanding that local government can be a career, and can open up the eyes of government officials too.

Tow: What’s next for this project?

Crawford: I will be a professor at Harvard Law School starting in July. I’m going to be doing what I’m talking about. I’ll be working with cities to place students. I’ll be helping groups like Next Century Cities with research for mayors about fiber networks, working with city data officers on new projects to make local governance more effective, and continuing to collaborate with the Tow Center to see what’s possible using data journalism.


Crawford will be moderating a panel at the Brown Institute for Media Innovation on Tuesday, April 28th, 2015 at 6:30 about the Responsive Cities Initiative.

The panelists will be:

  • Lev Gonick, Chief Executive, OneCommunity
  • Brett Goldstein, Fellow in Urban Science, U. of Chicago, Board Member of CFA
  • Elin Katz, Consumer Counsel, State of Connecticut
  • Oliver Wise, Director, Office of Performance and Accountability, City of New Orleans

The event is free and open to the public.

RSVP here

Disruptive Power by Taylor Owen

By Sybile Penhirin

Has technology altered power balances in the international affairs space? Are activist groups and individuals more empowered because of new digital tools? And in which ways are established powers, such as states, using technology to make sure they are still in control? These were some of the main questions Taylor Owen discussed at Tuesday’s Tow Center event, launching Owen’s new book Disruptive Power: The Crisis of the State in the Digital Age.


Owen was then joined by three panelists to debate these issues: Elmira Bayrasli, the founder of the World Policy Institute’s Global Entrepreneurial Ecosystems and the co-founder of Foreign Policy Interrupted; Eben Moglen, a law professor at Columbia Law School and the founding director of the Software Freedom Law Center; and Carne Ross, a former British diplomat and the executive director of a non-profit diplomatic advisory group called Independent Diplomat.

Owen started by analyzing the situation following Mubarak’s crackdown on the Internet during Egypt’s uprisings in January 2011. He pointed out that even though the Internet had been turned off, various channels of communication came back online. For instance, Telecomix, a net activist group that aims to promote freedom of expression via technology, reopened spaces of communication in Cairo.

“They set up dial-up modem lines around the city, they brought in a bunch of radio enthusiasts to set up frequency conversations between activist groups, they used fax machine networks around the city to send leaflets and tools for getting online,” Owen detailed.

“When you’re looking at what happened at Tahrir Square,” Elmira Bayrasli said during the panel, “I think Mubarak shutting down the Internet was actually a pivotal point,” adding that before that moment, the voices of people gathering in Tahrir Square were dying down.


Other states, such as Syria, did not turn off the Internet to block communications, but rather used it as a massive monitoring tool, Owen explained. But once again, activists got around the obstacles.

In the case of Syria, Telecomix provided activist groups with surveillance circumvention tools, such as encrypted browsers. They also sent mass emails to Syria with guidelines on how to communicate and broadcast their voices. Anonymous, the hacktivist group, also supported Syrian activists by taking down websites, tapping Syria’s digital infrastructure and going after financial institutions that were supporting the Assad regime, Owen said.

“We have this new layer of actors, we can debate how powerful they are, but this new layer of actors doesn’t fit comfortably in our categories of international actors. They are not states, not corporations, they are not NGOs,” Owen said, adding that these actors share a particular fluency in technology and cybertools. Of course, these technologies are not only available to activist groups and individuals but also to states, which have found ways to adapt to the situation. The NSA’s vast surveillance program is one example of the state’s unchecked power via technology.

But to govern with that massive amount of data, institutions need to be able to store it and draw meaning from it. And this isn’t always possible, Owen said, giving the example of a surveillance program in Afghanistan that the US had to shut down because it couldn’t store the data.


However, states have been trying to push back by controlling the entire network. States are very clever and well informed, and they are able to adapt to circumstances very quickly, Carne Ross noted during the panel.

“In China, the communist party has adapted itself, it hasn’t reformed itself, it hasn’t become more democratic, it has simply changed the way it looks,” Ross said. “This is my worry about it: that actually we are dealing with an entity, a government, which is far more intelligent and far more adaptable than we give it credit for.”

“At the end of the day,” Ross noted, “power is about who controls the territory and who controls the people in that territory.”

Additionally, Bayrasli pointed out that, while social media played a crucial role in the Arab Spring uprisings, these movements lacked clear leadership and plans and therefore did not entirely succeed. “The Facebook revolution and the Twitter revolutions did not go anywhere and the state did come in,” she said.

“What actually has happened is that the state, or the governments, or the American empire have imposed a very deceitful conception, which, with no bad intentions of any kind we’re helping them to achieve,” Eben Moglen said.

Watch Owen’s presentation and the full debate on the video below.


Sybile Penhirin is a reporter living in New York City. She was awarded the Henry N. Taylor Award when she graduated from Columbia Journalism School in May 2014. Sybile usually covers local news but is also interested in the ongoing transformations of the media landscape.

Columbia Journalism School Showcase – Call for Applications

The Columbia Journalism School Showcase is an open house event where industry partners, university collaborators, and the public are invited to the J-school to see and experience the original and innovative reporting, publishing and presentation work being done at the school – and we want your work to be a part of it.  Selected projects will be installed and presented in the Brown Institute for Media Innovation and the lobby of Pulitzer Hall on Tuesday, May 12th, accompanied by an evening reception.

The application is minimal: Submit a statement of ~200 words describing your project and what makes it innovative, original and important. You should include a link to the work, the name(s) and email address(es) of everyone involved, a description of how you imagine the project being presented in the space and, ideally, a Columbia Journalism School faculty contact.

Please submit your application via the online form available here. The deadline for submission is Monday, April 13th at 12:00 pm.

The reception and showcase will be held on the evening of Tuesday, May 12.  Master’s projects, class projects, and pieces developed through school-related events and partnerships are all welcome. We are looking for a range of media: video, audio, text, photography and visualization. Group and individual work is eligible, and students may submit as many pieces as they like.

Examples of projects presented at the Columbia Journalism School Showcase: We encourage you to take a look at projects presented at last year’s event at the following link — many of them were later published by major publications.

Selected works will be installed in the Brown Institute for Media Innovation and the lobby of Pulitzer Hall between 1:00 pm and 5:00 pm on Tuesday, May 12, 2015, with a reception and viewing to be held that evening from 6:00 pm – 9:00 pm.  

Note: All Columbia Journalism School Showcase applicants must be available to install and attend their work during the aforementioned times.

If you have any questions regarding the Columbia Journalism School Showcase, please e-mail the Tow Center Administrator, Elizabeth Boylan, at


Taking Stock of Virtual Reality at SXSW

In addition to wearables, an endless parade of questionable start-up pitches, and branded parties with very long “VIP” lines, virtual reality was a hot topic at this year’s SXSW Interactive. Here are a few takeaways, and some summary thoughts from Virtual Reality Journalism, a panel that the Tow Center pitched for this year’s SXSW programming.

First: There is still a ton of smoke and mirrors in this space, particularly around video capture and post-production. While companies like Jaunt claim that they have a near out-of-the-box solution for capture, and a “close to” automated system for video stitching, I think we need to question how close the industry really is. A production camera – ideally with far fewer than the now-standard and highly cumbersome 12-14 lenses – coupled with truly automatic algorithmic stitching, would be a game changer; however, there were few signs that this is imminent. For the near future, live-motion VR will continue to be a fluid space of experimentation on both the hardware and software sides.

Second: I had a chance to try industry heavyweight Oculus’s new prototype headset, Crescent Bay, and to experience a handful of demos the company built in-house for it. It is in a different class from other headsets I have tried: just the fit and comfort are far better, and one could imagine spending significant time in it, which can’t be said for many other products at the moment. More importantly, the screen resolution and CGI demos are remarkable. It is unambiguously clear that Oculus is focused on launching a high-end gaming platform with a suite of highly curated game-engine experiences in its app store. The demos had incredibly high production value and are markedly better than most VR content out there. While the Samsung Gear VR will, I suspect, continue to be used for live-motion VR, I doubt we will see much of it on the high-end launch device. It should be said that the new HTC Vive was also being demoed, but I didn’t have the opportunity to try it. The Vive allows for movement in space, and the response to it has been incredibly enthusiastic.

Third: At a few moments, some scholarly reflection on the social implications of VR cut through the marketing noise and bravado. One notable example was a remarkable presentation by Robert Overweg called Digital is Taking Over. A version of the talk can be found here. The central message of the talk, and of Overweg’s work, is that we are beginning to see a merging of digital and physical realities, particularly in how we remember. Memory – he has shown through very smart experiments – is highly susceptible to being augmented. Our brains are just not very good at distinguishing what really happened from augmented versions of real events. This, to put it bluntly, has profound implications. I encourage you to review his presentation; it profiles some excellent work.

Finally: A few thoughts about our VR Journalism panel, which included myself; Jason Mojica, editor-in-chief of VICE News, which has begun to invest significantly in the VR space; Nonny de la Peña, a journalist and documentary filmmaker who has been at the forefront of non-fiction storytelling in virtual reality; and James Milward, the founder and executive producer of Secret Location, a creative digital studio that, among other formats of digital media and content, has emerged as an early innovator in the practice of storytelling in VR and is also represented by Chris Milk’s new VR company VRSE.

As I said at the opening of the panel, at the Tow Center we are interested in technologies that don’t just augment the practice of journalism, but that fundamentally change what journalism is, and how it is practiced. Virtual Reality is one of those technologies. We have begun to engage in this space through an experimental project with Raney Aronson at Frontline and Secret Location, and are looking for ways of continuing this work.

Stemming from this initial work, I can make a few suggestions about the potential and challenges for VR and journalism. It is widely known that virtual reality is taking off, but much of the excitement, money, development and conversation is based in the film and gaming worlds. This is fine; however, it is leading to technologies and norms of practice that are designed for the needs of those sectors. I would suggest that these are very often not aligned with the capacity, constraints and needs of journalism. Take just one simple example: post-production stitching and interactivity coding currently takes weeks. This timeline is inconsequential when shooting a movie or working on a product for the entertainment sector, but it represents a huge problem for journalists.

So one thing we need to do, and that we tried to do on this panel, was to ground the VR conversation in journalism. Doing so, I would suggest, raises some interesting challenges.

  • There are new technical requirements. Live-motion VR requires new cameras, new editing and shooting processes, new post-production work, new viewing infrastructure and new levels of interactivity. All are nascent and need to be developed. We are a long way from easy solutions to any of them. This needs to be kept front and center.
  • There are new narrative forms emerging in the VR space. At its core, VR changes the positionality of journalists and shatters the fourth wall of journalism. But this comes with real challenges. Journalism is pulled from its bounded linear form and moved into a far more fluid space, where the user has some degree of agency. This changes who the journalist is in the story, and how a narrative is constructed.
  • There are new forms of representation. Journalism is about the representation of events and subjects. Journalists seek to immerse their audience in a context they would otherwise not know or experience. Over the centuries, we have developed forms for doing this (writing, photography, film, social media). Each brings us in a different way to the experiences of others. The question for me is whether virtual reality has the potential to further break down this distance. Does VR get us closer to the feelings of empathy and compassion that we feel in real life, and does it help us understand events better?

At the moment, the VR conversation is filled with both potential and irrational exuberance. This was on full display at SXSW. It is important for there to be voices and research that ground the hype in a conversation about what the form means, how it will be used, and what impacts (both positive and negative) it will have.

Play the News: Max Foxman’s New Report

“I don’t think linear storytelling is a good way to discuss complex problems,” said Heather Chaplin, who teaches at the New School, in a panel discussion on games and journalism at the Tow Center this week. The event launched a new report, Play the News: Fun and Games in Digital Journalism, by Max Foxman, a PhD candidate at Columbia Journalism School. Foxman was joined by an all-female panel consisting of Chaplin; Nicole Leffel of BuzzFeed; and Sisi Wei of ProPublica. For the report, Foxman interviewed forty journalists, game designers and game journalists, and got particular access to the Games Team at BuzzFeed.

While news games are not new—crosswords were widely popular in the 1920s—the proliferation of mobile devices has contributed to the huge success of electronic gaming. Blockbusters include Candy Crush Saga and Angry Birds. By comparison, news companies’ mobile-optimised products, including NYT Now, the Economist’s Espresso and NPR One, are yet to gain widespread adoption.

Foxman cites reasons to use games ranging from fun to more serious implications. Leffel highlighted the ‘delight’ that games can foster in the consumer—or user. But Wei and Chaplin were also quick to point out that designers and creators of news games must think about respect. Wei insisted that “games are a way to foster empathy.” And Chaplin talked of journalism’s role in a globalized world, wondering if games could be a useful storytelling tool in reporting climate change.

The panel also tackled the technical aspects of producing good news games. Chaplin recalled working on a collaborative process with two game designers. The process was challenging, because the culture of game design differs so significantly from the culture of journalism—there was a constant tension between accuracy and abstraction. While she often responded to ideas with “no, but…,” the game designers more often took the approach of “yes, and…,” touting an all-inclusive creative process that seems at odds with journalistic values. Yet, “Most games work because there is something subversive about them,” said Chaplin.

The conversation fit nicely with the Tow Center’s efforts to create a space that bridges technology and journalism.

Watch the video here. Download Foxman’s slide presentation here.


Tow Center to Workshop Algorithmic Transparency in the Media

Nicholas Diakopoulos will convene a group of industry and research experts for a crucial workshop of best practice and standards. We have a limited number of openings for students and members of the digital journalism community to join the group. You can apply using this form.

Last month BuzzFeed released a guide describing its editorial standards and ethics. It details everything from sourcing practices and how corrections and errors are handled, to how BuzzFeed views ethical questions and the ways in which its newsroom is insulated from business issues. But there’s one conspicuous absence: where are BuzzFeed’s standards or policies for the data and algorithms that are used in producing its news and information?

As data and algorithms become more deeply embedded in the ways in which news information is gathered, curated, mined, presented, and optimized, it will become increasingly important for news organizations to grapple with how best to be forthright and transparent with their users. How can modern media companies ensure a degree of algorithmic transparency, not only to address ethical concerns, but to bolster trustworthiness with a critical public? And of course this is a question now not only for “traditional” news organizations but also for the platforms and apps that so powerfully mediate the news that we receive.

The Tow Center is envisioning what a standard might look like for news and information organizations wanting to disclose aspects of their data and algorithms to a curious public. To this end we are convening an invitation-only workshop on Algorithmic Transparency in the Media on March 27th, in New York City.

The overall goal of the workshop is to bring together thinkers and doers operating in the space of algorithmic media to discuss the challenges, ethics, and practical steps that should be taken to develop a robust policy of news and information stewardship via algorithm. In particular we hope to make progress in considering how transparency approaches need to be adapted to the domain of algorithms in light of public and corporate interests, legality, and ethics. Stakeholders from traditional as well as new media and platforms will take part, alongside representatives bringing industry, academic, legal, and technical perspectives.

In addition to several short “firestarter” talks from attendees exploring different aspects of this issue, we’ll focus our attention on a few case studies where we see algorithms operating in the public media sphere, including (1) the use of automated writing software to generate content directly from data, (2) the use of algorithms in the curation, selection, recommendation, and personalization of content, and (3) the use of data and models in the communication of stories, such as in simulations or predictions. In small working groups attendees will focus on collectively dissecting and critiquing such case studies relating to algorithms in the media.

There are a handful of openings still available for the workshop, which we are opening up to you, loyal reader, to request an invite. By Friday, Feb. 27th, please visit this form, and briefly tell us why you want to be involved and what you can contribute to the event. We’ll get back to you shortly thereafter.

Nicholas Diakopoulos is an Assistant Professor at the University of Maryland, College Park College of Journalism and a member of the UMD Human Computer Interaction Lab (HCIL). 

He’s been associated with the Tow Center since 2013, working as a research fellow investigating algorithmic accountability.