I pledge allegiance, to the idea: The frustration of a workshop

The first thing I ever crowdsourced was in 2011. And I did it out of necessity. I was a Patch editor and I covered a city that shared a border with Detroit. It’s about four square miles, just over 20,000 people. In the early evening of a hot July day, chunks of the city’s power went out. A few hours later, nearly every resident and business was without power. The outage would last three days. It also happened during the hottest stretch of days metro Detroit had seen in more than a decade. For the entirety of the blackout, the temperature would hover around 100 degrees.

It started with a comment on the site’s Facebook page asking if I knew something about a power outage. I didn’t. So I started reporting. At first I didn’t really ask the community for anything. But I quickly realized that the outage was big, that the community wanted to talk about it with me and with each other, and that I probably wouldn’t be able to report on everything all myself. So the crowdsourcing started small. Something like: “Is your power out?” Then: “What are the utility workers telling you?” Then: “What are your questions? Let me report that for you.” This didn’t happen in any structured way. I was mostly using social and aggregating the comments, stories, and information into pages on the site. It was messy and it was amazing.

Through this messy experiment, I saw how a community could develop around an issue, and help me tell a story. It was powerful. It changed how I thought about journalism, the roles we play as both the community and the journalist, and how we interact with each other. This became that creative spark in my professional life. And I’ve tried to incorporate crowdsourcing in my work ever since.

So, for me, last Friday was important.

The Tow Center for Digital Journalism officially kicked off its guide to crowdsourcing — authored by Jeanne Pinder, Jan Schaffer, and Mimi Onuoha. It’s an excitingly thorough report on crowdsourcing, complete with a definition, a taxonomy, and some really great examples of how organizations are practicing crowdsourcing at the highest level (including ProPublica, where I work). It also showcases the breadth of the practice. The kickoff, held at the New York Times building, brought together a panel of some of the people working on these projects: CNN’s John D. Sutter, WNYC’s Jim Schachter, The New York Times’ Sona Patel, and my boss, ProPublica’s Amanda Zamora. (A nice recap of takeaways is here.)

A workshop followed. We broke up into groups and had about 20 minutes to find answers to four pretty big questions: defining and measuring impact; moderation and verification; how to get more crowdsourcing projects off the ground in newsrooms; and creating projects that the community actually needs and wants.

The conversations were solid. The event was a success. Business cards exchanged. We followed each other on Twitter.

But now what?

I left Friday feeling equal parts inspired and frustrated. Not with the event, not with the great work from the authors, not with anything that took place within those four hours. That was the inspired half. The frustrated half comes from not knowing how impactful this all will be in the newsrooms and the work of the attendees, or how the guide can help shape the idea of crowdsourcing, or how all the smart things everyone said will exist outside of that room. I want to make clear that this frustration isn’t anchored in a lack of hope. I think it’s a great time for crowdsourcing — newsrooms are starting to think about engagement beyond distribution, more tech is available to make the practice easier, and the guide itself is a huge step in the right direction. My frustration, I think, comes from my impatience and my love for the practice.

So I’m left with questions: What did everyone leave with and what does everyone plan on doing once they are at their desk? How do we know that the work we evangelize is making its way into the newsroom? How do we know that we’re making a difference or changing the mindset of the uninitiated editor, reporter, program director?

And the answer: I don’t think we really know right now. It’s a waiting game that boils down to: Who’s going to produce what? Who is going to take these ideas and create something?

And I want to make sure this happens in any way that I can. Working alongside Amanda, ProPublica’s Crowd-Powered News Network brings together crowd-powered news nerds and novices to talk with each other, bounce ideas off one another, and dig into projects. It’s a small step toward turning this into a movement (dare I use that word?) that aligns with what I think is a growing social and collaborative world. And I believe it starts with gathering, supporting and showcasing this type of work.

But that still leaves the question: “Now what?” For you, I would say read the guide if you haven’t. Join CPNN if you haven’t. Get inspired and do something crowd-powered … if you haven’t.

I also think the answer, at least for me, is to stay impatient. Keep pushing. Keep helping. Keep asking: “Who’s going to produce what?” Here’s an idea to make sure that happens (not to get all Joseph McCarthy on you): maybe at the next workshop you attend, ask everyone to make a pledge of allegiance to an idea. State your name, what you’ll do, and your contact info for follow-up.

Here’s my pledge: I’m Terry Parris Jr. And I pledge to highlight as much crowd-powered work as I can, and make myself available to anyone who needs help fleshing out or developing a crowd-powered idea. Oh, I also pledge to stay impatient. You can follow up with me on our CPNN board.

 

Terry Parris Jr. is ProPublica’s community editor. He’d crowdsource everything if you let him. Prior to joining ProPublica, he led digital production and engagement at WDET 101.9 FM, NPR’s affiliate in Detroit.

Messaging apps meet Journalism

There is an untapped distribution channel for publishers to create private, intimate and tailored relationships with the audiences they serve.


People have always had the desire to be informed. In pre-industrial societies, news was limited to conversations and gatherings. The arrival of pamphlets, edicts, ballads, journals and the first newssheets allowed information to spread regionally, while the Internet, social networks, smartphones and messaging apps instantaneously push data globally.

In particular, messaging apps also provide an untapped distribution channel for publishers to create private, intimate and tailored relationships with the audiences they serve — a “concierge” media service.

From search to social to messaging

Since the advent of the Internet, publishers have been trying to leverage distribution channels — such as search and social media — to grow their audiences. The Huffington Post saw immense growth by mastering the art of search engine optimization, and BuzzFeed capitalized on social sharing to become the media brand everyone talks about.

A significant opportunity for the next generation of media companies will come from knowing how to play in messaging apps. For journalists, brands and media creators, that means reimagining how they create content, how they engage users and how they generate revenue.

Messaging is ubiquitous in our lives. Apps can provide unique opportunities for giving audiences direct access to content and publishers, be it through tailor-made WeChat platforms or public chats on Viber and WhatsApp.

Why concierge media will become the way audiences consume content by 2020

1. Mobile messaging and chat apps will explode.

According to ComScore’s 2015 U.S. Mobile App Report, Millennials spent the majority of their time (over 90 hours a month) on smartphone apps. And guess what services they use the most? Messaging and social networks.

According to Global WebIndex, smartphone owners aren’t necessarily exclusive in choosing one specific messaging app. For example, 7 in 10 Snapchat users also use Facebook Messenger while 5 in 10 also use WhatsApp. People will be consuming content across chat platforms.

2. The leading publishers are starting to experiment with messaging content creation.

According to a recent Tow Center research report, The Wall Street Journal is among the news organizations with more than a million subscribers on LINE, Japan’s largest messaging app, where it has experimented with different kinds of content, daily alerts and breaking news. BuzzFeed, a more recent entrant to the platform, has taken a very different approach, building audiences through push notifications, stickers and comics.

BBC News was the first to experiment with editorial content on WhatsApp in 2014, most notably with its WhatsApp “lifeline” information service targeting communities in West Africa affected by Ebola. However, WhatsApp is not engineered to work efficiently as a mass push-notification service, and much of BBC News’ recent strategy on the platform has focused on audience engagement through user-generated content and newsgathering.

Some publishers are also linking journalists’ salaries to performance on messaging apps in an effort to grow digital audiences. For example, the Times of India is making its contributors file a minimum of three WhatsApp alerts every week.

How to start experimenting

It is important to remember that these platforms are still in their infancy. They are, in most cases, a longer-term play rather than one that will bear immediate fruit in terms of referrals and audience reach. Concierge media builds on people’s intrinsic desire to follow others, their curiosity, and the private, more intimate nature of conversations.

With that said, a team of editors can create specialized channels and seed them with content updates native to those platforms, using automated messages. These include text messages, alerts, stickers, trivia and emojis.

This seed content will engage members to participate in the discussion and elevate the level of the conversation by sharing unique perspectives. This model of journalism and media creation mimics what happens in the real world, where a person surfaces a topic and the group engages in an active exchange. This is what gives value to the story; the journalist is responsible for initiating the debate and moderating it. He or she can add value to the discussion by fact-checking or providing more information when needed.
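
To make this concrete, here is a minimal sketch of what automated seeding could look like. Everything here is hypothetical (the webhook URL, the payload shape, the scheduling), since each platform exposes its own bot API.

    import time
    import requests  # third-party HTTP library: pip install requests

    # Hypothetical endpoint for posting into a channel; real platforms
    # (Telegram, Viber, WeChat) each define their own bot APIs.
    CHANNEL_WEBHOOK = "https://example.com/channels/sports/messages"

    # Seed content native to the platform: short text, trivia, emoji.
    seed_items = [
        "Matchday! Who takes the derby tonight? ⚽",
        "Trivia: which club has won the most league titles? Reply with your guess.",
        "Alert: lineups are out. Tap for the full story.",
    ]

    for item in seed_items:
        requests.post(CHANNEL_WEBHOOK, json={"text": item})
        time.sleep(60 * 60)  # space posts an hour apart; a real bot would use a scheduler

The plumbing matters less than the editorial loop: automation handles the routine seeding, while the journalist watches the replies and moderates.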

The key takeaways

  • Concierge media on messaging apps can be built by developing hyper-specialized verticals across topics such as sports, entertainment, lifestyle and general news.
  • Messaging apps provide a space for news outlets to engage their audiences with different — and possibly lighter — types of content.
  • These verticals should be managed to track the interest levels of news stories and adapt dynamically to the news cycle. If a certain topic ceases to be meaningful, the channel’s editor needs to decide whether to sunset the channel, maintain it or create a sub-channel.
  • Automation and bots will be a central piece of the content publishing strategy.
  • Monetization opportunities will come from sales of digital goods such as stickers, e-commerce or micro-events such as paid live programming.
  • Concierge media could reinvent the concept of the newsroom to be inherently mobile and global. Reporters would not have to come to a physical space.
  • A big opportunity exists to build contributor networks of individuals with specialized knowledge of niche topics.

With time spent on messaging apps exploding, will the next generation consume most of its content on the small screen? Learn more in the Tow Center’s latest report on chat apps.

Francesco Marconi is the Strategy Manager for The Associated Press and an innovation fellow at the Tow Center. 

Key Findings from our Guide to Automated Journalism

Below is a sneak peek at the findings from our new report, “Guide to Automated Journalism.” Keep an eye out for the full guide, launching January 7, 2016, at 9:30 a.m. Eastern time.

For journalists

  • Human and automated journalism will likely become closely integrated and form a “man-machine marriage.”

  • Journalists are best advised to develop skills that algorithms cannot perform, such as in-depth analysis, interviewing, and investigative reporting.

  • Automated journalism will likely replace journalists who merely cover routine topics, but will also generate new jobs within the development of news-generating algorithms.

For news consumers

  • People rate automated news as more credible than human-written news but do not particularly enjoy reading automated content.

  • Automated news is currently most suited for topics where providing facts in a quick and efficient way is more important than sophisticated narration, or where news did not exist previously and consumers thus have low expectations regarding the quality of the writing.

  • Little is known about news consumers’ demand for algorithmic transparency, such as whether they need (or want) to understand how algorithms work.

For news organizations

  • Since algorithms cannot be held accountable for errors, liability for automated content will rest with a natural person (e.g., the journalist or the publisher).

  • Algorithmic transparency and accountability will become critical when errors occur, in particular when covering controversial topics and/or personalizing news.

  • Apart from basic guidelines that news organizations should follow when automatically generating news, little is known about which information should be made transparent regarding how the algorithms work.

For society

  • Automated journalism will substantially increase the amount of available news, which will further increase the burden on people of finding the content most relevant to them.

  • An increase in automated—and, in particular, personalized—news is likely to reemphasize concerns about potential fragmentation of public opinion.

  • Little is known about potential implications for democracy if algorithms are to take over part of journalism’s role as a watchdog for government.

Reducing barriers between programmers and non-programmers in the newsroom

Muck, a research project sponsored by the Tow Center for Digital Journalism, seeks to design and prototype a system for authoring data-driven stories. The entire computational process behind producing a story should be clear, correct, and reproducible, for students and professionals alike. Build systems are a well-understood means of structuring such computations, and we wish to make this technique easy for data journalists. In so doing, we hope to expand the industry’s notion of what constitutes data journalism and reduce barriers between programmers and non-programmers in the newsroom.

Last month, we invited several working data journalists to the Center to talk about their processes and challenges working with software, as well as the institutional barriers separating “documentary” and “data” journalists in newsrooms. The discussion revealed several priorities for us to focus on: a story-driven process, strong support for iterative (and often messy) data transformation, and reproducibility. Above all, the system must be comprehensible for both professional programmers and less technical participants.

Start With Questions, Not Datasets

Data journalism exists to answer questions about the world that traditional journalistic techniques like interviewing and background research cannot resolve. But that does not mean that data alone makes a story. In fact, Noah Veltman of WNYC’s Data News Team warned against starting with a dataset:

“There’s a lot of noise in a giant dataset on a subject. You need to attack it with a question. Otherwise, you can get lost in endlessly summing and averaging data, looking for something interesting. If you just go fishing, you end up wasting a lot of time.”

Our guests agreed that their best data stories started with a question, often from reporters who don’t work on the data team. For example, ProPublica’s Olga Pierce explained that their Surgeon Scorecard story and app came to life because reporters covering medical mishaps wondered why surgeons seemed so unconcerned about accountability for their mistakes. When she looked, Pierce found that although some data on surgery outcomes existed, it was locked in proprietary datasets and obscure formats, making a more traditional investigation difficult.

Similarly, The Guardian’s The Counted project on U.S. police killings and WNYC’s Mean Streets project on New York traffic fatalities came from reporters questioning discrepancies between their published stories and official government fatality counts. In both cases, subsequent investigations into the public data found that official reports drastically undercount deaths. Both projects eventually became important examples of modern data journalism.

In all of these instances, documentary reporters came to their data teams when they ran into questions that traditional techniques could not answer. These examples encourage us to focus on helping journalists construct data analyses to answer their questions, rather than on tools for data exploration in the absence of questions. Often though, data journalists face major challenges before they can even begin an analysis.

Don’t Underestimate Data Manipulation

Usually, the first problem is finding the data at all. According to Veltman, they often have to improvise:

“A lot of the time you have to say, ‘Given the data we do have, we can’t answer the exact question, but what might be a reasonable proxy?’”

For Surgeon Scorecard, Pierce found that the best proxy available was a proprietary dataset of Medicare outcomes that ProPublica purchased for the story. Once she had the data, she still had to reduce it to a usable set that would accurately illustrate the problem. Due to idiosyncrasies in the way Medicare reports outcomes, she chose to narrow the set down to two specific negative outcomes: did the patient die in the hospital or return within 30 days? Then, she further filtered the data to include results from only eight types of elective surgeries where a patient would be able to choose a surgeon in the first place. Thus, a large portion of the work revolved around preparing the data for statistical analysis.
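
As a rough illustration of that kind of winnowing, here is a minimal pandas sketch. The file name and column names are hypothetical (the Medicare data ProPublica used is proprietary); it simply mirrors the two filters described above.

    import pandas as pd

    # Hypothetical file and column names, for illustration only.
    outcomes = pd.read_csv("medicare_outcomes.csv")

    # Filter 1: keep only the two specific negative outcomes.
    negative = outcomes[outcomes["died_in_hospital"] | outcomes["readmitted_within_30_days"]]

    # Filter 2: keep only elective procedures, where a patient could
    # plausibly have chosen a surgeon (an illustrative subset).
    ELECTIVE = {"knee_replacement", "hip_replacement", "spinal_fusion"}
    scorecard = negative[negative["procedure"].isin(ELECTIVE)]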

Further complicating this process, data journalists often use different tools to clean the data than those ultimately used to produce the story or app. This makes the analysis even less accessible to beginners, and can make it harder to stay focused on the larger story while analyzing data. To avoid this problem, Michael Keller of Al Jazeera advocated for using the same language to both analyze and output data:

“I’ve had success using Node [a server programming environment based on Javascript] to do data analysis…If I can do sketches in the same format as the published version, it helps, because it collapses the cognitive distance between the analysis and the narrative.”

Even if the journalist can use the same language to experiment as well as publish, the process still requires writing different pieces of code for each step, and steps may depend on each other in complicated ways. If a piece of code from an earlier step changes, developers often have to rewrite or at least manually re-run every subsequent step to incorporate those changes. This means that in practice, guaranteeing the validity of results is very challenging.

Remember Reproducibility and Auditability

Several participants said that programming in a deadline-driven environment drags developers into a mentality of ‘just get it to work,’ rather than working well. Guardian developer Rich Harris put it most succinctly, describing the typical process as writing “spaghetti code until it works.”

While “spaghetti code” (a programming term for poorly structured, tangled logic) may be the fastest way to meet a deadline, expediency often leads to practical problems with checking and verifying a story. By nature, most code is hard for anyone but the author to understand, and even experienced programmers admit to finding their own code inscrutable at times. Entire programming movements (literate programming is one of the more elaborate examples) have been developed to try to overcome this problem. In the newsroom, programmers face not only deadline pressure but also an expectation that results, and by extension the integrity of the programs themselves, be verifiably accurate.

Many of our participants mentioned strategies for verifying their results. WNYC performs formal sanity checks on their projects to look for red flags, but there is rarely time for comprehensive code reviews. Sarah Cohen of the New York Times Computer-Assisted Reporting desk said that her team keeps a journal documenting how they arrive at a given result. With Surgeon Scorecard, ProPublica’s Pierce took this concept a step further by asking other colleagues on the interactive team to reproduce her work based on her journal, providing “checkpoints” along the way to help them stay on track.

But in our discussion, two major shortcomings to these approaches emerged. The first is that editors outside of the data team rarely check conclusions, because the rest of the newsroom usually lacks the required analytical knowledge and/or the ability to read code. This disconnect between data journalists and traditional journalists makes verification expensive and time-consuming. For large, high-impact stories, WNYC performs full code reviews, and ProPublica brings in outside experts to validate conclusions. But as WNYC’s Veltman put it, “Deadlines usually leave no time for reviewing, much less refactoring code.”

The second shortcoming is that unless the record of modifications to the data is perfect, auditing work from end to end is impossible. Small manual fixes to bad data are almost always necessary; these transformations take place in spreadsheets or live programming sessions, not in properly versioned (or otherwise archived) programs. Several participants expressed concerns with tracking and managing data. These problems are compounded by the need to pass data back and forth between various participants, whose technical abilities vary.

Systems for sharing and versioning documents have existed for decades, but as the Times’ Cohen put it: “No matter what we do, at the end of a project the data is always in 30 versions of an Excel spreadsheet that got emailed back and forth, and the copy desk has to sort it all out…It’s what people know.”

Cohen’s observation speaks to a common thread running through all of our discussion topics: newsrooms are incredibly challenging environments to write code in. Not only do programmers face pressure to complete projects on a deadline, but the credibility of both the author and the publication rests on the accuracy of the results.

Our Direction: Muck

All of these themes have analogs within the broader software industry, and it is only natural to look for inspiration from accepted industry solutions. Broadly speaking, we characterize the basic process of data journalism as a transformation of input data into some output document.

For clarity and reproducibility, such transformations should be decomposed into simple, logical steps. A whole project then can be described as a network of steps, or, more precisely, a directed acyclic graph. The notion of a dependency graph is well understood in computer science, and a whole class of tools called build systems exist to take advantage of this representation.
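
A toy example shows how little machinery the core idea requires. Python’s standard library can already order such a graph; this illustrates the concept only, not Muck’s implementation.

    from graphlib import TopologicalSorter  # standard library, Python 3.9+

    # Each target maps to the steps it depends on; together they form
    # a directed acyclic graph describing the whole project.
    graph = {
        "crime.csv": [],                  # raw input data
        "cleaned.csv": ["crime.csv"],     # cleaning step
        "summary.json": ["cleaned.csv"],  # analysis step
        "story.html": ["summary.json"],   # rendered narrative
    }

    # A build system visits steps in dependency order, so a change to an
    # early step invalidates exactly the steps downstream of it.
    for target in TopologicalSorter(graph).static_order():
        print("build", target)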

Our approach is to create a build system for data journalism. We have designed simple conventions into Muck to make it easy for novices to start using; even non-programmers should be able to grasp the basic structure of Muck projects.

Because our goal is to assist in telling narrative stories, projects using Muck typically revolve around English-language documents, written in a Markdown-like syntax that compiles to HTML. New users can start writing in these formats immediately, and quickly output documents ready to be put online or into a CMS.

To manipulate data within the Muck framework, the programmer writes Python scripts that make their dependencies explicit to Muck. This relieves the programmer from having to worry about the complexities of dependencies, one of the more time-consuming and inscrutable parts of software projects. In contrast to traditional tools like Make, our system calculates the dependency relationships between files automatically. Moreover, it recognizes dependencies between source code and input data, so that as soon as any aspect of the project changes, it can rebuild just the affected parts.
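
For instance, a step that produces cleaned.csv might be a script named cleaned.csv.py. The conventions in this sketch are assumptions for illustration (in particular, that the script’s standard output becomes the target’s contents), but it follows the idea of dependencies being inferred from the files a script references:

    # cleaned.csv.py: builds the hypothetical target "cleaned.csv".
    import csv
    import sys

    # Reading "crime.csv" is what tells the build tool that this step
    # depends on it; there is no separate makefile rule to maintain.
    with open("crime.csv") as f:
        reader = csv.DictReader(f)
        rows = [row for row in reader if row["year"] == "2015"]
        fieldnames = reader.fieldnames

    # Whatever the script prints becomes the built file.
    writer = csv.DictWriter(sys.stdout, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)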

Ideally, Muck would allow the programmer to use any language or tool of their choice, but in practice we cannot add support for all languages directly into the build tool. Instead, we plan to add support for popular tools incrementally, and also allow the programmer to manually specify dependencies for any other tools they wish to use.

To make projects easy for both programmers and non-programmers to understand, Muck operates through an interface familiar to nearly every computer user: the hierarchical file system. Each step of the overall computation is represented by a named file, and references to these names within files imply the overall dependency structure. This makes changing the structure as simple as renaming files, and encourages the programmer to choose descriptive names for each step. By using simple directory structures on the file system, the approach remains lightweight, flexible, and amenable to existing work habits, text editors, and industry standards.
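
A small project might therefore look something like this (an illustrative layout, not one taken from Muck’s documentation), with the dependency graph implied entirely by names:

    crime-story/
        crime.csv          raw input data
        cleaned.csv.py     step: clean the raw data
        summary.json.py    step: compute totals from cleaned.csv
        story.md           narrative that references summary.json

Renaming a step (and the references to it) is all it takes to restructure the project.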

We want Muck to be easy to learn in an academic setting and to implement in a newsroom. It should solve common problems that journalists face without adding too much overhead. By starting with writing in English and compiling to HTML, Muck makes getting started with a data journalism project easy.

Build systems are goal-directed by nature, allowing the programmer to stay focused on the task at hand, while allowing them to add or rearrange goals at will as projects become more complex. Our tool is aware of a variety of file formats, allowing the author to switch easily between the narrative, analysis, and presentation aspects of the work. We hope that this fluidity will reduce the “cognitive distance” between tasks. We expect that once the process of creating discrete steps becomes second nature, the problem of reproducibility will disappear, because any manual fixes to data will exist as just another step in the process.

Above all, Muck encourages simple, direct structuring of projects to promote clarity, correctness, and reproducibility. All too often, the barriers to understanding software begin with cryptic names and hidden relationships. We believe that by emphasizing clear naming as the glue binding the elements of software together, projects will become more easily understood by everyone.

Engaging communities through solutions journalism

“In local news the only thing they report on are bad things, only negative things …they are not showing us how to change the community.”

“What I have to do is just block myself away from that. Shut the news up because it ain’t nothing but an ignorant box anyway.”

-South Los Angeles focus group discussion participants

How can a solutions-oriented approach to journalism affect communities where reporting tends to focus on crime, poverty, and other problems?

As Tow Fellows, we have been working in conjunction with the University of Southern California’s Metamorphosis Project, led by Dr. Sandra Ball-Rokeach, to understand how residents process news about where they live. We investigated local, community-engaged, solutions-oriented journalism in the context of South Los Angeles, an area with a long history of negative coverage.

Solutions-oriented journalism builds upon the concepts of peace journalism and civic journalism in highlighting responses to social problems and engaging residents in coverage. The strongest stories use the rigor of investigative reporting to explore systemic problems and critically examine efforts to address them that have the potential to be scaled. While they highlight positive outcomes, these are not simply ‘good news’ puff pieces.

In October and November, we conducted six focus group discussions with 48 African American and Latino South L.A. residents. Participants were invited to read either a solutions or non-solutions version of a story produced as part of our Watts Revisited collaboration, which worked with local media to report solutions-oriented stories about social issues in South LA. Moderators, who shared participants’ ethnic background, led discussions about attitudes and behaviors regarding local news and these particular stories.

The focus group participants offered insights into how residents of a stigmatized community navigate and interpret local coverage, and the opportunities and limitations of solutions journalism to engage these audiences. We will be releasing a full report in January, but our preliminary findings include:

  • Participants largely responded favorably to the problem-solving orientation of solutions journalism. “News needs to be an actual participant in what’s happening rather than just reporting on it…” one said. “It needs to be a part of the change.” But enthusiasm was tempered by concerns about the larger context of systemic inequality.
  • Participants described how, given their distrust, particularly of local television, they valued online news and social media as ways to cross-check stories and seek alternative community information.
  • Participants reported they would be more likely to seek out news and share stories with friends and family if solutions-oriented stories were more common.

Other research has found similar results on a larger scale. The Solutions Journalism Network and the Engaging News Project have shown that readers of solutions-oriented versions of stories are more likely than readers of traditional versions to want to seek out similar stories, share them on social media, and get involved in responses to problems. That research primarily used stories on a national and international level. Our project built on their work, asking how solutions journalism would be received at the local level—where community members have the greatest chance of effecting change.

Our project also, critically, builds upon research the Metamorphosis Project has been doing on the communication needs of residents in South L.A. and other diverse communities since 1998. Researchers found that more cohesive communities tend to have stronger ‘storytelling networks’—that is, residents, local and ethnic media, and community organizations are connected to each other and share an understanding about what is happening in their community. In communities like South L.A., these networks often become problematic when the link between organizations and media is weak, or when the content of the stories circulating is overwhelmingly negative. Residents who connect to such problematic storytelling networks tend to be less engaged and lack a sense of belonging. It was for this reason that the Metamorphosis Project sought to connect local media with community organizations to produce a series of solutions-oriented stories.

We can offer the following recommendations for follow-up and additional research:

  • Expanding opportunities for resident and community organization involvement in various stages of the story development and dissemination process.
  • Inviting local television to participate would address concerns identified by focus group participants and expand project reach.
  • Creating more resources for local news. Local solutions journalism will only have limited success unless larger structural and resources issues within journalism are addressed. Local solutions journalism requires an investment of resources and time.
  • Cultivating reporters who come from the communities they report on—or, at a minimum, enabling reporters to embed themselves so that they are responsive to local sensitivities, foster trust and understand concerns regarding representation.

Additional research on local solutions journalism may further our understanding of the format’s potential:

  • Comparing the cumulative consumption of media diets that have either a greater number of solutions-oriented stories or more traditional stories.
  • Replicating the current study in other areas of the same city to see how residents from different ethnic and class backgrounds respond to stories that are geographically close but concern an “other.”

Solutions-oriented journalism does not offer a magic bullet for engaging audiences, either as media consumers or as civic actors. However, we believe that, particularly in communities with a long history of negative coverage, stories featuring community perspectives that take a critical look at responses to social problems offer an opportunity to strengthen connections between residents, media, and community organizations. At the end of all of our discussion sessions, participants asked us how they could learn more about the issues raised in these stories. Many wanted to get involved. We hope our study may offer some insights for other researchers, media, and community organizations as they explore how local news may become a more constructive actor in engaged and informed communities.

Tow Tea: Network Analysis for Investigations

“Everything is connected” may be a popular refrain, but as journalists we of course always want to know, “In what way?” This was the theme for the final Tow Tea of the 2015 fall semester, where Joe Karaganis of the American Assembly, a public policy institute hosted by Columbia University, and Matthew Weber, a network analyst and assistant professor at the School of Communication and Information at Rutgers University, discussed their use of network analysis for research and investigation. In a packed room, the presentations and subsequent Q&A, moderated by Susan McGregor, assistant director of the Tow Center at Columbia Journalism School, centered on the concepts, practical applications, and problems of analyzing networks.

 

What kinds of networks are good subjects for investigative analysis? As Weber explained, any group whose members can be connected according to a commonality they share can be interrogated this way. Weber’s own research, for example, focuses on the news ecosystem and the way the Internet’s technological disruption has altered that ecosystem over the last 20 years. Presenting recent work on media organizations in New Jersey, Weber showed visualizations of the relationship between digital and traditional organizations in the local New Jersey news industry. An abstract map of colored cluster points representing publishers and news websites modeled the spread and relative “distance” between the two. Whereas traditional newspapers and emergent news websites occupied clearly demarcated spaces in the media industry around 1998, the colored cluster points had merged and mingled into a more integrated whole by 2006.


“Network maps are visualized data,” said Weber, referring to the dual nature of such maps: at once visually legible enough to provide insight, yet abstract enough to display the entities or members in a group (“nodes”) and the relationships between members (“edges,” or lines representing those relationships) in multiple ways.

 

Joe Karaganis, meanwhile, employs in practice what Weber researches: he uses network analysis to map slow or large-scale trends in public policy, markets, and intellectual property. Karaganis’s work showcases various examples of network-based analysis in action, such as the American Assembly’s Media Piracy Report on systematic intellectual property theft in developing countries, or littlesis.org (as opposed to Big Brother), a grassroots watchdog project that maps connections between members of the social elite and financial organizations in the United States based on campaign finance data. Matching political candidates with recurring transactions from familiar donors is a key methodology in this project, which relies heavily on network analysis.

 

“How do you tell stories with huge datasets? Do you use clouds? Topography?” asks Karaganis with reference to the problem often met by investigative—and more particularly: data—journalists. While extensive datasets can make attractive sources due to their apparent empirical authority and the promise of computational analysis, it can be very difficult to present findings from data in such a way that the conclusions are meaningful to a broad audience.

 

The problematic relationship between big data and a good story is a recurring one in many of the Journalism School’s conferences and workshops throughout this last semester. As McGregor pointed out during the panel discussion: “You don’t need a p-value below 0.05 to write a good story.” But the flip-side, unfortunately, is also true: “Sometimes data doesn’t pan out into a story, [but] they get published anyway because of the pretty pictures,” said Karaganis.

 

Beyond the philosophical deliberation, data and the network maps based on data are increasingly useful—even indispensable—in a journalist’s work today. But how does one begin using network analysis?

 

Weber’s advice is to think of it first in simple terms. “For people in journalism: first ask right questions, find specific questions. For example: take legislation A and legislation B. Find the connections and record them in an Excel file. Excel then can calculate centrality.”

 

Measures of centrality—along with “distance,” “bridge,” and “degree”—are the basic conceptual building blocks of network analysis. What entities in a network are close to one other? Which member bridges two major clusters? What are the distances between particular nodes? These are the features of a networked group that can be computed and eventually visualized in a network.
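
For readers who want to see these measures computed, here is a minimal sketch using the open-source networkx library (one option among many, not a tool the panelists named) on a toy donor network:

    import networkx as nx  # third-party: pip install networkx

    # Toy network: nodes are people and organizations,
    # edges are documented relationships between them.
    G = nx.Graph()
    G.add_edges_from([
        ("Donor A", "Candidate X"),
        ("Donor A", "Candidate Y"),
        ("Donor B", "Candidate Y"),
        ("Candidate Y", "PAC Z"),
    ])

    # Degree centrality: who has the most direct connections?
    print(nx.degree_centrality(G))

    # Betweenness centrality: who bridges otherwise separate clusters?
    print(nx.betweenness_centrality(G))

    # Distance: length of the shortest path between two nodes.
    print(nx.shortest_path_length(G, "Donor A", "PAC Z"))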

 

Karaganis recommends kumu.io as an easy-to-use, free online tool for building visual presentations of networks. Weber suggests Excel and NodeXL (a Windows-only add-in), which allows users to create lists of network data.

 

But open-source visualization tools abound: Weber cites gephi.org, free beta software that draws network maps from imported data. The School of Communication and Information at Rutgers has also released its own network-analysis code via GitHub (aekeus). Finally, Karaganis advises students to take advantage of Lada Adamic’s (University of Michigan) extensive tutorial on social network analysis on Coursera.

Network analysis is an increasingly popular way to approach data, more and more of which seems to be interconnected: financial transactions, social interactions, documents, court decisions and so on. But while computational tools are essential in performing this kind of analysis, Karaganis stressed that crafting good analyses and visualizations requires both subject knowledge and creativity: “At this point,” he said, “it’s more like an art than a science.”

A Guide to the Business of Podcasting: The Executive Summary

With this report, I have aimed to explain why podcasts matter to digital journalism: as our world shifts to mobile consumption, podcasts represent mobile-first content that engages audiences in ways that no other mobile medium previously has.

This guide provides a detailed overview of the current podcasting landscape, which is characterized by industry disruption, new networks, and increased podcast listening (especially on mobile devices) and awareness among consumers. As part of this overview, I describe the conceptual and technological challenges podcasting has to overcome if it is to achieve meaningful growth and industry legitimacy. I also briefly outline the challenges facing the industry as it looks to the future: the issues of iTunes as gatekeeper, the short-term and long-term efficacy of networks, the ethical dilemmas that native ads and branded content pose, and the need for more creativity and diversity in terms of content creation.

Podcasts are pursuing multiple revenue streams: advertising/sponsorship, foundation support, direct support, subscription models, and live events. While advertising is currently the fastest growing and most lucrative stream, these last three streams attempt to convert audience engagement and loyalty into recurring donations.

Because there is no “one size fits all” solution to generating revenue, and each podcast/company follows a different business model, it’s important to consider the operational philosophies that inform the ways podcasts and networks are raising revenue and prioritizing revenue streams. This guide explores the ways in which these philosophies play out in four case studies: PRX’s Reveal, Gimlet Media, Buzzfeed, and Panoply.

It is still too early to declare any podcast or podcast company a “success.” Many podcasts/networks currently rely heavily on advertising and are still experimenting with alternative revenue streams. Outside of advertising and branded content, podcasts show the most potential as an audience engagement tool that can diversify content, add value to brands/consumers, and generate enthusiasm for direct support and/or freemium models.

Home pages could become a thing of the past for media organizations

We could soon see the emergence of a new wave of publishers that don’t require home pages or apps; their sole purpose is to syndicate content through different channels and social platforms.


Since the advent of the Internet, publishers have been trying to leverage distribution channels — such as social media networks — to drive traffic to their own websites. Now, though, content can be hosted and monetized on these third-party platforms through services including Facebook’s Instant Articles or Snapchat’s Discover. As such, we can see the emergence of a new wave of “homeless” media companies that don’t require a home page; their sole purpose is to syndicate content.

The evolution of digital media distribution

Digital media companies have, to this point, generated revenue primarily through advertising displayed on their own websites. The amount of money earned corresponds at least indirectly with the size of the audience visiting their pages, so publishers turn to external platforms to build their brands. The Huffington Post and Drudge Report utilized search engine optimization to grow, while BuzzFeed, Vice and Vox are prime examples of sites utilizing social media channels to create “viral” content.

In fact, BuzzFeed founder and CEO Jonah Peretti said his company was “a global, cross-platform network for news and entertainment.” According to social analytics company NewsWhip, BuzzFeed earned more than 35 million combined Facebook likes, shares and comments in October alone.

From traffic acquisition to syndication

The goal of acquiring traffic via search and social is coming to an end for some, though. A few publishers are aiming to syndicate their content through various channels, not including a home page or website they own. They instead rely solely on the distribution and monetization from being on third-party platforms such as Facebook. A good example of a media company using this model is NowThis, which has built a successful content business model despite having a website that currently reads, “Homepage. Even the word sounds old. Today the news lives where you live.”

Why “homeless” media could become a trend

1. Mobile and app consumption is on the rise.

In June, mobile accounted for two out of every three minutes spent consuming digital media in the U.S., according to comScore. In 2014, mobile apps surpassed desktop as the leading digital media usage platform.

Also in June, according to comScore, total time spent consuming digital media via mobile apps reached close to 779 billion minutes, versus nearly 551 billion minutes on desktops. eMarketer (subscription required) highlights that users in the US spent 170 billion minutes on Facebook’s mobile app. It’s becoming clear, if it hasn’t already, where audiences are spending their time — social networks.

2. Third parties are creating platforms for publishers to easily distribute and monetize content.

Facebook’s Instant Articles and Snapchat’s Discover allow partnering publishers to directly reach growing audience bases with native content. For example, The Washington Post recently announced it would send 100 percent of its stories to Facebook so all content could be immediately accessible to users of the social network, with no loading time or paywalls to sift through. CNN has a dedicated team of one designer and two edit staffers specifically focused on creating content for Snapchat.

Additionally, Google recently announced its Accelerated Mobile Pages (AMP) Program to help publishers create mobile-optimized content and have it load instantly everywhere. Twitter has an opportunity to work with partners on its Moments feature, which presents a curated section of tweets related to the same topic.

Media companies can benefit by teaming up with these technology firms to create platform-specific content for the distribution channels with the largest audiences, potentially creating more engagement and revenue opportunities.

What does this mean for you?

If you are a journalist or executive at a traditional media company …

The rise of “homeless” media suggests a tipping point is at hand and that media companies will need to figure out how to approach it from both business and editorial perspectives. How much (and what) content should be sent to these channels? What’s the revenue potential in either case?

  • Even if you already have a big following, engagement remains the key. You will need to develop content verticals and unique programming that can attract new audiences.
  • Acquire or partner with smaller niche publications with distinctive voices that can augment your reach and inject innovation into your content.
  • Hire strategic partner managers to develop relationships with the platforms and get you on one of their publishing programs.

If you are an entrepreneur building the next big media brand …

  • Create distinctive content — build a niche audience that enables you to build a following.
  • Because you are not tied to any editorial guidelines, you have the opportunity to innovate on the format and build a following of people who are looking for new content.
  • Create a low-overhead business with minimal product development. Focus your resources on content creation and social distribution.
  • Hire people who have experience with platforms or who have worked at companies like Facebook, Twitter or Snapchat.

If you work at one of the big platforms …

  • Open up the native program to more publishers and provide support on best practices. Collaborate with publishers because the future of your business relies on great content.
  • Create a seamless experience for consumers to watch, read and experience content. You are the new “cable” so it will be important to create an uncluttered experience that centralizes the best content.
  • Share consumption data and help publishers (both new and legacy media) understand what content works so your users’ experience and content consumption can be refined over time.
  • Create opportunities for revenue sharing within the platform and build a safe environment for advertisers to invest their money.

If you are a venture capitalist looking for media businesses with growth potential …

  • Look for new media companies that have a distinctive voice and content that attracts a niche audience of engaged consumers.
  • Identify “content-first” companies with lean businesses and low overhead in product development, websites and mobile apps.
  • Help media startups in your portfolio develop strategic partnerships with larger media brands that are looking for niche content and new programming to grow their social following.

With native content consumption on third-party platforms growing, will it still be relevant for media companies to invest significant resources on running and maintaining their websites and mobile apps?

Francesco Marconi is the Strategy Manager for The Associated Press and a fellow at Tow Center. He writes about media, storytelling and innovation.

Tow Tea: FOIA Workshop

On Thursday, November 19, 2015, the Tow Center hosted a Tow Tea workshop on accessing and collecting data and information made available under the Freedom of Information Act (FOIA). Guests Shawn Musgrave of the New England Center for Investigative Reporting and Nabiha Syed of BuzzFeed shared their tips and strategies for making FOIA requests.

A FOIA request is a legally binding request to a government agency for an *existing* record, says Syed. A record is data that people keep, so think broadly about it. Data sets count as records in most states, though sometimes FOIA officers don’t know that, and you have to persuade them.

At BuzzFeed, Syed oversees many FOIA requests and finds that robustly written requests get better responses, even though detailed explanations are not required by law.

One quick way to get information is to request a set of records that has been requested before – many offices publish lists of prior requests, says Musgrave. If you can determine who made the request, you can say something like: “I want what ProPublica got for this article.”

Non-profit organizations and advocacy groups are also good resources when you are trying to figure out how to draft your request. Get in touch with them before you file and ask if they know anyone you can contact. As with other reporting methods, Musgrave notes, it helps to do your homework ahead of time – and for data requests in particular, it’s important to get an IT person involved if possible. Both panelists agreed that being in phone contact with the people handling your request is also key.

Keeping track of your requests also matters. Syed suggests telephoning as soon as you file, just to let the office know you’ve submitted. Every statute has a time period during which the agency has to provide a reasonable estimate of the time it will take to fulfill the request.

“Sometimes they might say it will take 20 days,” says Syed, but she warned that 20 days is a magical number that is rarely achieved.

“Sometimes, you get a letter and the agency says it will take seven months – but you can call every month and ask what’s going on. If it takes a year, stay on top of it. I do FOIA Fridays,” says Syed.

If you simply don’t hear from an agency within the estimated period, says Musgrave, you do have the right to sue for “constructive denial” of the request, though this usually requires a lawyer’s advice.

While filing FOIAs for a particular story makes sense, says Musgrave, if you have a beat you cover, you may simply want to make requests in that topic area.

“If I find a public document referenced somewhere and can’t locate it, I’ll just file for it.”

While FOIA laws operate under a “presumption of openness,” that doesn’t mean you can get access to everything.

Syed pointed out that there is a range of exemptions under which your request can be denied. National security and privacy are big ones, says Syed, but there are some surprising ones too, like the locations of gas and oil rigs. Drafts of things that haven’t yet become policy are also exempt.

That said, one can always appeal a denial and challenge the exemption – you don’t have to take what the government says at face value.

Most of all, Syed and Musgrave agreed, it is important to keep in mind that people who respond to FOIA requests are human. Try to develop them as you would any other source and you will be much more successful.

Image Truth / Story Truth

On Friday, October 16, 2015, the Tow Center hosted Image Truth/Story Truth, a conference by Nina Berman and Gary Knight, in collaboration with the Brown Institute for Media Innovation.

The conference explored the ubiquity of the image in the digital age, and allowed participants from different disciplines and professions to come together and discuss the norms and ethics around photojournalism today.

The program can be viewed here.

Nina Berman, professor at Columbia Journalism School, wrote about the conference:

When talking about photojournalism ethics, the conversation tends to focus on the integrity of the digital image and the rules governing Photoshop manipulation. Photojournalists are prohibited from adding or deleting objects or people from their pictures, or from combining two pictures and passing them off as one moment. Those who break these conventions are dismissed from employment. The degree of acceptable toning and use of filters to enhance contrast and color varies widely by publication.

The Image Truth/Story Truth conference aimed to broaden the debate around ethics and direct it away from pixels and post processing, towards representation, context and commissioning.

Are photojournalists creating images that repeat certain visual tropes and perpetuate social stereotypes? Do contests such as World Press Photo and the Pulitzer Prize reinforce those stereotypes by consistently awarding work that focuses on the dramatic individuation of suffering and the search for the iconic moment?

Is it time to dispense with the catchwords of yesterday that focus on humanizing subjects (as though they were ever less than human), or giving voice to the voiceless, language steeped in hierarchy and outdated notions of narrative privilege?

Given the complexity of contemporary conflict, should pictures do more than provoke emotional reactions? Is it enough to simply wait for disasters to happen and then make gorgeous images of those disasters, as one panelist asked? Can a deeper form of documentation and witnessing take place that looks less to the dramatic moment, and more to causes and context? Can new technologies help or distract? Is a new visual language required?

And finally, what is the purpose of photojournalism? Is it to record? Or to advocate? Is it illustrative or investigative? Detached or collaborative? Can work produced within a corporate commercial context be anything but conformist? Is work commissioned by NGOs more true or just a different kind of sell?

Image Truth/Story Truth – an intentionally ambitious title – predictably presented no conclusions. Rather, the purpose was to put together highly accomplished people who don’t normally converse: industry leaders with academics, curators and critics, and to see what develops.

-Nina Berman

The day comprised five panels, ranging in topic from a deep dive into the ethics of the World Press Photo Awards to a panel that explored the narratives that develop around particular images in a socio-political context. Video of the five panels can be viewed below:

Panel 1: Introduction and Contests and Ethics: The World Press Photo Award

Panel 2: World Press Photo Response

Panel 3: Politics of the Image and the Constructed Event

Panel 4: The Press and Photography

Panel 5: What is a Photograph?  The Future of Photography and the Professional Image-Maker
